Science.gov

Sample records for accurately predict critical

  1. Predictability of critical transitions

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaozhu; Kuehn, Christian; Hallerberg, Sarah

    2015-11-01

    Critical transitions in multistable systems have been discussed as models for a variety of phenomena ranging from the extinctions of species to socioeconomic changes and climate transitions between ice ages and warm ages. From bifurcation theory we can expect certain critical transitions to be preceded by a decreased recovery from external perturbations. The consequences of this critical slowing down have been observed as an increase in variance and autocorrelation prior to the transition. However, especially in the presence of noise, it is not clear whether these changes in observation variables are statistically relevant such that they could be used as indicators for critical transitions. In this contribution we investigate the predictability of critical transitions in conceptual models. We study the quadratic integrate-and-fire model and the van der Pol model under the influence of external noise. We focus especially on the statistical analysis of the success of predictions and the overall predictability of the system. The performance of different indicator variables turns out to be dependent on the specific model under study and the conditions of accessing it. Furthermore, we study the influence of the magnitude of transitions on the predictive performance.
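    As a concrete illustration of the critical-slowing-down indicators discussed above (a sketch, not code from the paper), the following computes rolling variance and lag-1 autocorrelation for a toy AR(1) series whose correlation time grows toward a transition; the window size and drift schedule are arbitrary choices.

```python
# Hedged sketch: rolling variance and lag-1 autocorrelation as
# early-warning indicators of an approaching critical transition.
# The time series and window size are illustrative, not the paper's.
import numpy as np

def rolling_indicators(x, window):
    """Return rolling variance and lag-1 autocorrelation of series x."""
    variances, autocorrs = [], []
    for i in range(len(x) - window + 1):
        seg = x[i:i + window]
        variances.append(np.var(seg))
        seg = seg - seg.mean()
        autocorrs.append(np.corrcoef(seg[:-1], seg[1:])[0, 1])
    return np.array(variances), np.array(autocorrs)

# Toy example: AR(1) noise whose coefficient drifts toward 1,
# mimicking critical slowing down before a transition.
rng = np.random.default_rng(0)
n = 2000
x = np.zeros(n)
for t in range(1, n):
    a = 0.5 + 0.45 * t / n          # AR(1) coefficient drifts toward 1
    x[t] = a * x[t - 1] + rng.normal()
var, ac = rolling_indicators(x, 200)
# Both indicators trend upward as the transition approaches.
```

    In real data the statistical relevance of such trends must still be tested, which is precisely the question the abstract raises.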

  2. Hounsfield unit density accurately predicts ESWL success.

    PubMed

    Magnuson, William J; Tomera, Kevin M; Lance, Raymond S

    2005-01-01

    Extracorporeal shockwave lithotripsy (ESWL) is a commonly used non-invasive treatment for urolithiasis. Helical CT scans provide detailed imaging of the patient with urolithiasis, including the ability to measure the density of urinary stones. In this study we tested the hypothesis that the density of urinary calculi as measured by CT can predict successful ESWL treatment. 198 patients were treated at Alaska Urological Associates with ESWL between January 2002 and April 2004. Of these, 101 met study inclusion criteria, with accessible CT scans and stones ranging from 5-15 mm. Follow-up imaging demonstrated stone freedom in 74.2%. The mean Hounsfield density values (HDV) for the stone-free and residual-stone groups were significantly different (93.61 vs 122.80, p < 0.0001). We determined by receiver operating characteristic (ROC) analysis that an HDV of 93 or less carries a 90% or better chance of stone freedom following ESWL for upper tract calculi of 5-15 mm.
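    The cutoff idea can be made concrete with a hedged sketch: sweep a Hounsfield-density threshold over synthetic data that only mimic the reported group means. The sample sizes, standard deviation, and helper name are invented for illustration.

```python
# Hedged sketch: estimating the stone-free rate among patients whose
# Hounsfield density value (HDV) falls at or below a candidate cutoff.
# The synthetic groups only mimic the study's reported means.
import numpy as np

rng = np.random.default_rng(3)
hdv_free = rng.normal(93.6, 25, size=75)     # stone-free group (synthetic)
hdv_resid = rng.normal(122.8, 25, size=26)   # residual-stone group

def success_rate_below(cutoff):
    """Fraction of patients at or below the cutoff who became stone-free."""
    n_free = np.sum(hdv_free <= cutoff)
    n_resid = np.sum(hdv_resid <= cutoff)
    total = n_free + n_resid
    return n_free / total if total else 0.0

rate_at_93 = success_rate_below(93)
overall_rate = success_rate_below(10_000)    # cutoff so high it includes everyone
# On this toy data, selecting patients with HDV <= 93 raises the
# stone-free rate well above the overall baseline.
```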

  3. On the Accurate Prediction of CME Arrival At the Earth

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Hess, Phillip

    2016-07-01

    We will discuss relevant issues regarding the accurate prediction of CME arrival at the Earth, from both observational and theoretical points of view. In particular, we clarify the importance of separating the study of CME ejecta from the ejecta-driven shock in interplanetary CMEs (ICMEs). For a number of CME-ICME events well observed by SOHO/LASCO, STEREO-A and STEREO-B, we carry out 3-D measurements by superimposing geometries onto the ejecta and the sheath separately. These measurements are then used to constrain a Drag-Based Model, which we improve by making the drag coefficient height-dependent. Combining all these factors allows us to create predictions for both fronts at 1 AU and compare them with actual in-situ observations. We show an ability to predict the sheath arrival with an average error of under 4 hours and an RMS error of about 1.5 hours. For the CME ejecta, the error is less than two hours, with an RMS error within an hour. Through using the best observations of CMEs, we show the power of our method in accurately predicting CME arrival times. The limitations and implications of our prediction method will be discussed.
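    The drag-based model mentioned above can be sketched numerically as follows. This is a generic DBM with a constant drag coefficient, not the authors' height-dependent version, and every parameter value is an illustrative assumption.

```python
# Hedged sketch of a simple drag-based model (DBM) for CME propagation:
# the CME front decelerates toward the ambient solar-wind speed w with
# acceleration a = -gamma * (v - w) * |v - w|.  All values illustrative.
AU_KM = 1.496e8  # 1 astronomical unit in km

def dbm_arrival(r0_km, v0_kms, w_kms=400.0, gamma_per_km=2e-8, dt_s=60.0):
    """Integrate the DBM until the front reaches 1 AU; return (hours, v)."""
    r, v, t = r0_km, v0_kms, 0.0
    while r < AU_KM:
        a = -gamma_per_km * (v - w_kms) * abs(v - w_kms)  # km/s^2
        v += a * dt_s
        r += v * dt_s
        t += dt_s
    return t / 3600.0, v

# A fast CME launched at 20 solar radii with 1000 km/s decelerates
# toward ~400 km/s and arrives at 1 AU after roughly two to three days.
hours, v_1au = dbm_arrival(r0_km=20 * 6.96e5, v0_kms=1000.0)
```

    In the paper's improved version, gamma would additionally depend on heliocentric height, which this sketch omits.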

  4. Passive samplers accurately predict PAH levels in resident crayfish.

    PubMed

    Paulik, L Blair; Smith, Brian W; Bergmann, Alan J; Sower, Greg J; Forsberg, Norman D; Teeguarden, Justin G; Anderson, Kim A

    2016-02-15

    Contamination of resident aquatic organisms is a major concern for environmental risk assessors. However, collecting organisms to estimate risk is often prohibitively time and resource-intensive. Passive sampling accurately estimates resident organism contamination, and it saves time and resources. This study used low density polyethylene (LDPE) passive water samplers to predict polycyclic aromatic hydrocarbon (PAH) levels in signal crayfish, Pacifastacus leniusculus. Resident crayfish were collected at 5 sites within and outside of the Portland Harbor Superfund Megasite (PHSM) in the Willamette River in Portland, Oregon. LDPE deployment was spatially and temporally paired with crayfish collection. Crayfish visceral and tail tissue, as well as water-deployed LDPE, were extracted and analyzed for 62 PAHs using GC-MS/MS. Freely-dissolved concentrations (Cfree) of PAHs in water were calculated from concentrations in LDPE. Carcinogenic risks were estimated for all crayfish tissues, using benzo[a]pyrene equivalent concentrations (BaPeq). ∑PAH were 5-20 times higher in viscera than in tails, and ∑BaPeq were 6-70 times higher in viscera than in tails. Eating only tail tissue of crayfish would therefore significantly reduce carcinogenic risk compared to also eating viscera. Additionally, PAH levels in crayfish were compared to levels in crayfish collected 10 years earlier. PAH levels in crayfish were higher upriver of the PHSM and unchanged within the PHSM after the 10-year period. Finally, a linear regression model predicted levels of 34 PAHs in crayfish viscera with an associated R-squared value of 0.52 (and a correlation coefficient of 0.72), using only the Cfree PAHs in water. On average, the model predicted PAH concentrations in crayfish tissue within a factor of 2.4 ± 1.8 of measured concentrations. This affirms that passive water sampling accurately estimates PAH contamination in crayfish. Furthermore, the strong predictive ability of this simple model suggests
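    The paper's simple linear model can be illustrated with a hedged sketch: an ordinary-least-squares fit of log tissue concentration on log freely-dissolved water concentration (Cfree). The data below are synthetic, and the slope, intercept, and noise level are invented, not the study's.

```python
# Hedged sketch: regressing log tissue concentration on log Cfree,
# in the spirit of the paper's simple linear model.  Synthetic data.
import numpy as np

rng = np.random.default_rng(1)
log_cfree = rng.uniform(-2, 2, size=50)            # log10 water Cfree (invented units)
log_tissue = 0.8 * log_cfree + 1.0 + rng.normal(0, 0.4, size=50)

# Ordinary least squares: log_tissue ~ slope * log_cfree + intercept
A = np.column_stack([log_cfree, np.ones_like(log_cfree)])
(slope, intercept), *_ = np.linalg.lstsq(A, log_tissue, rcond=None)

pred = slope * log_cfree + intercept
ss_res = np.sum((log_tissue - pred) ** 2)
ss_tot = np.sum((log_tissue - log_tissue.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
# With only Cfree as predictor, a moderate R-squared like the study's
# 0.52 is plausible; here the synthetic noise level sets the fit quality.
```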

  5. Plant diversity accurately predicts insect diversity in two tropical landscapes.

    PubMed

    Zhang, Kai; Lin, Siliang; Ji, Yinqiu; Yang, Chenxue; Wang, Xiaoyang; Yang, Chunyan; Wang, Hesheng; Jiang, Haisheng; Harrison, Rhett D; Yu, Douglas W

    2016-09-01

    Plant diversity surely determines arthropod diversity, but only moderate correlations between arthropod and plant species richness had been observed until Basset et al. (Science, 338, 2012 and 1481) finally undertook an unprecedentedly comprehensive sampling of a tropical forest and demonstrated that plant species richness could indeed accurately predict arthropod species richness. We now require a high-throughput pipeline to operationalize this result so that we can (i) test competing explanations for tropical arthropod megadiversity, (ii) improve estimates of global eukaryotic species diversity, and (iii) use plant and arthropod communities as efficient proxies for each other, thus improving the efficiency of conservation planning and of detecting forest degradation and recovery. We therefore applied metabarcoding to Malaise-trap samples across two tropical landscapes in China. We demonstrate that plant species richness can accurately predict arthropod (mostly insect) species richness and that plant and insect community compositions are highly correlated, even in landscapes that are large, heterogeneous and anthropogenically modified. Finally, we review how metabarcoding makes feasible highly replicated tests of the major competing explanations for tropical megadiversity. PMID:27474399

  6. Basophile: Accurate Fragment Charge State Prediction Improves Peptide Identification Rates

    DOE PAGES

    Wang, Dong; Dasari, Surendra; Chambers, Matthew C.; Holman, Jerry D.; Chen, Kan; Liebler, Daniel; Orton, Daniel J.; Purvine, Samuel O.; Monroe, Matthew E.; Chung, Chang Y.; et al

    2013-03-07

    In shotgun proteomics, database search algorithms rely on fragmentation models to predict fragment ions that should be observed for a given peptide sequence. The most widely used strategy (Naive model) is oversimplified, cleaving all peptide bonds with equal probability to produce fragments of all charges below that of the precursor ion. More accurate models, based on fragmentation simulation, are too computationally intensive for on-the-fly use in database search algorithms. We have created an ordinal-regression-based model called Basophile that takes fragment size and basic residue distribution into account when determining the charge retention during CID/higher-energy collision induced dissociation (HCD) of charged peptides. This model improves the accuracy of predictions by reducing the number of unnecessary fragments that are routinely predicted for highly-charged precursors. Basophile increased the identification rates by 26% (on average) over the Naive model, when analyzing triply-charged precursors from ion trap data. Basophile achieves simplicity and speed by solving the prediction problem with an ordinal regression equation, which can be incorporated into any database search software for shotgun proteomic identification.
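    The ordinal-regression idea can be sketched as a toy threshold rule: a score built from fragment length and basic-residue count is cut into ordered charge bands. The coefficients and thresholds below are invented for illustration and are not Basophile's published parameters.

```python
# Hedged toy: an ordinal-regression-style rule that caps the predicted
# fragment charge using fragment length and basic residue count (R, K, H).
# Coefficients and thresholds are invented for illustration only.
def predicted_max_charge(fragment, precursor_charge):
    basics = sum(fragment.count(aa) for aa in "RKH")
    score = 0.15 * len(fragment) + 1.0 * basics
    thresholds = [2.0, 4.5, 7.0]        # ordinal cut points -> charge bands
    charge = 1 + sum(score > t for t in thresholds)
    return min(charge, precursor_charge)

print(predicted_max_charge("PEPTIDE", 3))      # → 1 (short, no basic residues)
print(predicted_max_charge("KKLVRSLHEK", 3))   # → 3 (many basic residues)
```

    The point of such a rule is the one the abstract makes: fragments of implausibly high charge are never generated, shrinking the candidate list the search engine must score.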

  9. Mouse models of human AML accurately predict chemotherapy response

    PubMed Central

    Zuber, Johannes; Radtke, Ina; Pardee, Timothy S.; Zhao, Zhen; Rappaport, Amy R.; Luo, Weijun; McCurrach, Mila E.; Yang, Miao-Miao; Dolan, M. Eileen; Kogan, Scott C.; Downing, James R.; Lowe, Scott W.

    2009-01-01

    The genetic heterogeneity of cancer influences the trajectory of tumor progression and may underlie clinical variation in therapy response. To model such heterogeneity, we produced genetically and pathologically accurate mouse models of common forms of human acute myeloid leukemia (AML) and developed methods to mimic standard induction chemotherapy and efficiently monitor therapy response. We see that murine AMLs harboring two common human AML genotypes show remarkably diverse responses to conventional therapy that mirror clinical experience. Specifically, murine leukemias expressing the AML1/ETO fusion oncoprotein, associated with a favorable prognosis in patients, show a dramatic response to induction chemotherapy owing to robust activation of the p53 tumor suppressor network. Conversely, murine leukemias expressing MLL fusion proteins, associated with a dismal prognosis in patients, are drug-resistant due to an attenuated p53 response. Our studies highlight the importance of genetic information in guiding the treatment of human AML, functionally establish the p53 network as a central determinant of chemotherapy response in AML, and demonstrate that genetically engineered mouse models of human cancer can accurately predict therapy response in patients. PMID:19339691

  10. Fast and accurate predictions of covalent bonds in chemical space.

    PubMed

    Chang, K Y Samuel; Fias, Stijn; Ramakrishnan, Raghunathan; von Lilienfeld, O Anatole

    2016-05-01

    We assess the predictive accuracy of perturbation theory based estimates of changes in covalent bonding due to linear alchemical interpolations among molecules. We have investigated σ bonding to hydrogen, as well as σ and π bonding between main-group elements, occurring in small sets of iso-valence-electronic molecules with elements drawn from second to fourth rows in the p-block of the periodic table. Numerical evidence suggests that first order Taylor expansions of covalent bonding potentials can achieve high accuracy if (i) the alchemical interpolation is vertical (fixed geometry), (ii) it involves elements from the third and fourth rows of the periodic table, and (iii) an optimal reference geometry is used. This leads to near linear changes in the bonding potential, resulting in analytical predictions with chemical accuracy (∼1 kcal/mol). Second order estimates deteriorate the prediction. If initial and final molecules differ not only in composition but also in geometry, all estimates become substantially worse, with second order being slightly more accurate than first order. The independent particle approximation based second order perturbation theory performs poorly when compared to the coupled perturbed or finite difference approach. Taylor series expansions up to fourth order of the potential energy curve of highly symmetric systems indicate a finite radius of convergence, as illustrated for the alchemical stretching of H2 (+). Results are presented for (i) covalent bonds to hydrogen in 12 molecules with 8 valence electrons (CH4, NH3, H2O, HF, SiH4, PH3, H2S, HCl, GeH4, AsH3, H2Se, HBr); (ii) main-group single bonds in 9 molecules with 14 valence electrons (CH3F, CH3Cl, CH3Br, SiH3F, SiH3Cl, SiH3Br, GeH3F, GeH3Cl, GeH3Br); (iii) main-group double bonds in 9 molecules with 12 valence electrons (CH2O, CH2S, CH2Se, SiH2O, SiH2S, SiH2Se, GeH2O, GeH2S, GeH2Se); (iv) main-group triple bonds in 9 molecules with 10 valence electrons (HCN, HCP, HCAs, HSiN, HSi
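    A toy version of the first-order "vertical" (fixed-geometry) alchemical expansion described above can be written down with Morse potentials standing in for real bonding curves; the alchemical coordinate lambda linearly interpolates the Morse parameters between molecule A and molecule B. All parameter values are invented.

```python
# Hedged toy: first-order Taylor expansion along an alchemical coordinate
# lambda, evaluated at a fixed ("vertical") geometry.  Morse potentials
# with invented parameters stand in for ab initio bonding curves.
import numpy as np

def morse(r, D, a, re):
    return D * (1 - np.exp(-a * (r - re))) ** 2 - D

def E(lam, r):
    # Morse parameters vary linearly in lambda; E itself does not.
    D = 4.0 + lam * 1.0
    a = 1.0 + lam * 0.2
    re = 1.1 + lam * 0.1
    return morse(r, D, a, re)

r = 1.2                                   # fixed geometry
h = 1e-4
dE_dlam = (E(h, r) - E(0.0, r)) / h       # finite-difference derivative at lambda=0
first_order = E(0.0, r) + 1.0 * dE_dlam   # predict molecule B (lambda=1)
exact = E(1.0, r)
error = abs(first_order - exact)
# The first-order estimate lands far closer to the lambda=1 curve than
# simply reusing the lambda=0 energy, mirroring the paper's finding for
# vertical interpolations.
```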

  13. An Overview of Practical Applications of Protein Disorder Prediction and Drive for Faster, More Accurate Predictions

    PubMed Central

    Deng, Xin; Gumm, Jordan; Karki, Suman; Eickholt, Jesse; Cheng, Jianlin

    2015-01-01

    Protein disordered regions are segments of a protein chain that do not adopt a stable structure. Thus far, a variety of protein disorder prediction methods have been developed and have been widely used, not only in traditional bioinformatics domains, including protein structure prediction, protein structure determination and function annotation, but also in many other biomedical fields. The relationship between intrinsically-disordered proteins and some human diseases has played a significant role in disorder prediction in disease identification and epidemiological investigations. Disordered proteins can also serve as potential targets for drug discovery with an emphasis on the disordered-to-ordered transition in the disordered binding regions, and this has led to substantial research in drug discovery or design based on protein disordered region prediction. Furthermore, protein disorder prediction has also been applied to healthcare by predicting the disease risk of mutations in patients and studying the mechanistic basis of diseases. As the applications of disorder prediction increase, so too does the need to make quick and accurate predictions. To fill this need, we also present a new approach to predict protein residue disorder using wide sequence windows that is applicable on the genomic scale. PMID:26198229
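    The wide-sequence-window idea can be sketched as a toy per-residue score: average a residue-level disorder propensity over a centered window. The propensity weights and window width below are invented and are not those of the paper's predictor.

```python
# Hedged toy: per-residue disorder score from a wide sequence window.
# Propensity weights are invented; real predictors learn such features.
DISORDER_PROPENSITY = {aa: 0.0 for aa in "ACDEFGHIKLMNPQRSTVWY"}
for aa in "PESQKAG":      # disorder-promoting residues (toy weights)
    DISORDER_PROPENSITY[aa] = 1.0
for aa in "WFYILVCN":     # order-promoting residues (toy weights)
    DISORDER_PROPENSITY[aa] = -1.0

def disorder_scores(seq, window=21):
    """Average propensity over a centered window; positive = disorder-like."""
    half = window // 2
    scores = []
    for i in range(len(seq)):
        seg = seq[max(0, i - half):i + half + 1]
        scores.append(sum(DISORDER_PROPENSITY[aa] for aa in seg) / len(seg))
    return scores

# Low-complexity polar N-terminal half vs hydrophobic C-terminal half.
scores = disorder_scores("PPPESQKSGGSS" + "WWFYILVVLFYI")
# Scores are high (disorder-like) at the N-terminus and low at the C-terminus.
```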

  14. Differential contribution of visual and auditory information to accurately predict the direction and rotational motion of a visual stimulus.

    PubMed

    Park, Seoung Hoon; Kim, Seonjin; Kwon, MinHyuk; Christou, Evangelos A

    2016-03-01

    Visual and auditory information are critical for perception and enhance an individual's ability to respond accurately to a stimulus. However, it is unknown whether visual and auditory information contribute differentially to identifying the direction and rotational motion of a stimulus. The purpose of this study was to determine the ability of an individual to accurately predict the direction and rotational motion of a stimulus based on visual and auditory information. We recruited 9 expert table-tennis players and used the table-tennis service as our experimental model. Participants watched recorded services with different levels of visual and auditory information and attempted to anticipate the direction of the service (left or right) and its rotational motion (topspin, sidespin, or cut). We recorded their responses and quantified two outcomes: (i) directional accuracy and (ii) rotational-motion accuracy, each computed as the number of accurate predictions relative to the total number of trials. Participants' ability to predict the direction of the service improved with additional visual information but not with auditory information. In contrast, their ability to predict the rotational motion of the service improved when auditory information was added to visual information, but not with additional visual information alone. In conclusion, visual information enhances an individual's ability to accurately predict the direction of a stimulus, whereas additional auditory information enhances the ability to accurately predict its rotational motion.

  15. PredictSNP: robust and accurate consensus classifier for prediction of disease-related mutations.

    PubMed

    Bendl, Jaroslav; Stourac, Jan; Salanda, Ondrej; Pavelka, Antonin; Wieben, Eric D; Zendulka, Jaroslav; Brezovsky, Jan; Damborsky, Jiri

    2014-01-01

    Single nucleotide variants represent a prevalent form of genetic variation. Mutations in coding regions are frequently associated with the development of various genetic diseases. Computational tools that predict the effects of mutations on protein function are therefore very important for the analysis of single nucleotide variants and their prioritization for experimental characterization. Many computational tools are already widely employed for this purpose. Unfortunately, their comparison and further improvement are hindered by large overlaps between training datasets and benchmark datasets, which lead to biased and overly optimistic reported performances. In this study, we constructed three independent datasets by removing all duplicates, inconsistencies and mutations previously used in the training of the evaluated tools. The benchmark dataset, containing over 43,000 mutations, was employed for the unbiased evaluation of eight established prediction tools: MAPP, nsSNPAnalyzer, PANTHER, PhD-SNP, PolyPhen-1, PolyPhen-2, SIFT and SNAP. The six best-performing tools were combined into a consensus classifier, PredictSNP, resulting in significantly improved prediction performance; the consensus also returned results for all mutations, confirming that consensus prediction represents an accurate and robust alternative to the predictions delivered by individual tools. A user-friendly web interface enables easy access to all eight prediction tools, the consensus classifier PredictSNP and annotations from the Protein Mutant Database and the UniProt database. The web server and the datasets are freely available to the academic community at http://loschmidt.chemi.muni.cz/predictsnp.
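    The consensus strategy can be sketched as a simple majority vote over tool calls. This is a hedged sketch of the general technique: the input format and the conservative tie-breaking rule are assumptions of this example, not PredictSNP's actual implementation (which weights tools by confidence).

```python
# Hedged sketch of a consensus classifier in the spirit of PredictSNP:
# combine binary calls of several prediction tools by majority vote.
from collections import Counter

def consensus(calls):
    """Majority vote over tool predictions ('deleterious'/'neutral').

    Ties fall back to 'deleterious' as the conservative choice
    (an assumption of this sketch, not the published behavior)."""
    counts = Counter(calls)
    if counts["deleterious"] >= counts["neutral"]:
        return "deleterious"
    return "neutral"

# Toy calls from six tools for one hypothetical mutation.
votes = {"SIFT": "deleterious", "PolyPhen-2": "deleterious",
         "MAPP": "neutral", "PhD-SNP": "deleterious",
         "SNAP": "neutral", "PANTHER": "deleterious"}
print(consensus(votes.values()))  # → deleterious
```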

  16. Accurately Predicting Complex Reaction Kinetics from First Principles

    NASA Astrophysics Data System (ADS)

    Green, William

    Many important systems contain a multitude of reactive chemical species, some of which react on a timescale faster than collisional thermalization, i.e. they never achieve a Boltzmann energy distribution. Usually it is impossible to fully elucidate the processes by experiments alone. Here we report recent progress toward predicting the time-evolving composition of these systems a priori: how unexpected reactions can be discovered on the computer, how reaction rates are computed from first principles, and how the many individual reactions are efficiently combined into a predictive simulation for the whole system. Some experimental tests of the a priori predictions are also presented.

  17. Does more accurate exposure prediction necessarily improve health effect estimates?

    PubMed

    Szpiro, Adam A; Paciorek, Christopher J; Sheppard, Lianne

    2011-09-01

    A unique challenge in air pollution cohort studies and similar applications in environmental epidemiology is that exposure is not measured directly at subjects' locations. Instead, pollution data from monitoring stations at some distance from the study subjects are used to predict exposures, and these predicted exposures are used to estimate the health effect parameter of interest. It is usually assumed that minimizing the error in predicting the true exposure will improve health effect estimation. We show in a simulation study that this is not always the case. We interpret our results in light of recently developed statistical theory for measurement error, and we discuss implications for the design and analysis of epidemiologic research.
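    The paper's counterintuitive point can be reproduced in a small simulation sketch: a predictor with smaller exposure-prediction error (classical-type error) yields a more biased health-effect slope than a less accurate Berkson-type predictor. All numbers are illustrative, not taken from the paper's simulations.

```python
# Hedged simulation sketch: better exposure prediction does not imply
# better health effect estimation.  Berkson-type error (w1) leaves the
# slope unbiased; classical error (w2) attenuates it, even though w2
# predicts the true exposure more accurately here.
import numpy as np

rng = np.random.default_rng(2)
n, beta = 100_000, 1.0
z = rng.normal(size=n)                 # observed covariate
u = rng.normal(size=n)                 # unobserved exposure variation
x = z + u                              # true exposure, variance 2
y = beta * x + rng.normal(size=n)      # health outcome

w1 = z                                 # Berkson-like predictor (error = u)
w2 = x + rng.normal(0, 0.5, size=n)    # classical-error predictor

def slope(w, y):
    return np.cov(w, y)[0, 1] / np.var(w)

mse1, mse2 = np.mean((w1 - x) ** 2), np.mean((w2 - x) ** 2)
b1, b2 = slope(w1, y), slope(w2, y)
# w2 predicts exposure better (mse2 < mse1), yet its slope is attenuated
# toward var(x) / (var(x) + 0.25) * beta ≈ 0.89, while b1 ≈ beta = 1.
```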

  18. Is Three-Dimensional Soft Tissue Prediction by Software Accurate?

    PubMed

    Nam, Ki-Uk; Hong, Jongrak

    2015-11-01

    The authors assessed whether virtual surgery, performed with a soft tissue prediction program, could correctly simulate the actual surgical outcome, focusing on soft tissue movement. Preoperative and postoperative computed tomography (CT) data for 29 patients who had undergone orthognathic surgery were obtained and analyzed using the Simplant Pro software. The program generated a predicted soft tissue image (A) based on presurgical CT data. After the operation, an actual soft tissue image (B) was generated from the postoperative CT data. Finally, the 2 images (A and B) were superimposed and the differences between them analyzed. Results were grouped into 2 classes: absolute values and vector values. Among the absolute values, the left mouth corner was the most significant error point (2.36 mm); the right mouth corner (2.28 mm), labrale inferius (2.08 mm), and the pogonion (2.03 mm) also had significant errors. In the vector values, predictions in the right-left direction had a left-sided tendency, the superior-inferior direction a superior tendency, and the anterior-posterior direction an anterior tendency. As a result, with this program, predicted points tended to be located more to the left, more anterior, and more superior than their actual positions. There is a need to improve the prediction accuracy for soft tissue images. Such software is particularly valuable in predicting craniofacial soft tissue landmarks, such as the pronasale. With this software, landmark positions were most inaccurate in anterior-posterior predictions.

  19. Towards Accurate Ab Initio Predictions of the Spectrum of Methane

    NASA Technical Reports Server (NTRS)

    Schwenke, David W.; Kwak, Dochan (Technical Monitor)

    2001-01-01

    We have carried out extensive ab initio calculations of the electronic structure of methane, and these results are used to compute vibrational energy levels. We include basis set extrapolations, core-valence correlation, relativistic effects, and Born- Oppenheimer breakdown terms in our calculations. Our ab initio predictions of the lowest lying levels are superb.

  20. Machine Learning Predictions of Molecular Properties: Accurate Many-Body Potentials and Nonlocality in Chemical Space.

    PubMed

    Hansen, Katja; Biegler, Franziska; Ramakrishnan, Raghunathan; Pronobis, Wiktor; von Lilienfeld, O Anatole; Müller, Klaus-Robert; Tkatchenko, Alexandre

    2015-06-18

    Simultaneously accurate and efficient prediction of molecular properties throughout chemical compound space is a critical ingredient toward rational compound design in chemical and pharmaceutical industries. Aiming toward this goal, we develop and apply a systematic hierarchy of efficient empirical methods to estimate atomization and total energies of molecules. These methods range from a simple sum over atoms, to addition of bond energies, to pairwise interatomic force fields, reaching to the more sophisticated machine learning approaches that are capable of describing collective interactions between many atoms or bonds. In the case of equilibrium molecular geometries, even simple pairwise force fields demonstrate prediction accuracy comparable to benchmark energies calculated using density functional theory with hybrid exchange-correlation functionals; however, accounting for the collective many-body interactions proves to be essential for approaching the “holy grail” of chemical accuracy of 1 kcal/mol for both equilibrium and out-of-equilibrium geometries. This remarkable accuracy is achieved by a vectorized representation of molecules (so-called Bag of Bonds model) that exhibits strong nonlocality in chemical space. In addition, the same representation allows us to predict accurate electronic properties of molecules, such as their polarizability and molecular frontier orbital energies.
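    The Bag of Bonds representation mentioned above can be sketched as follows: pairwise Coulomb terms are grouped into "bags" by atom-pair type, sorted, and zero-padded to fixed lengths so molecules of different sizes map to vectors of equal dimension. The element set, padding sizes, and coordinates here are invented for illustration.

```python
# Hedged sketch of a Bag-of-Bonds-style vectorization: group pairwise
# Coulomb terms Zi*Zj/|ri - rj| by atom-pair type, sort each bag in
# descending order, and zero-pad bags to fixed lengths.
import numpy as np
from itertools import combinations

Z = {"H": 1, "C": 6, "O": 8}   # nuclear charges for a toy element set

def bag_of_bonds(symbols, coords, bag_sizes):
    bags = {pair: [] for pair in bag_sizes}
    for i, j in combinations(range(len(symbols)), 2):
        pair = tuple(sorted((symbols[i], symbols[j])))
        d = np.linalg.norm(np.asarray(coords[i]) - np.asarray(coords[j]))
        bags[pair].append(Z[symbols[i]] * Z[symbols[j]] / d)
    vec = []
    for pair in sorted(bag_sizes):              # fixed bag order
        bag = sorted(bags[pair], reverse=True)
        vec.extend(bag + [0.0] * (bag_sizes[pair] - len(bag)))
    return np.array(vec)

# Water, with illustrative coordinates in angstroms.
sizes = {("H", "H"): 3, ("H", "O"): 4, ("O", "O"): 1}
v = bag_of_bonds(["O", "H", "H"],
                 [(0.0, 0.0, 0.0), (0.96, 0.0, 0.0), (-0.24, 0.93, 0.0)],
                 sizes)
# v has length 3 + 4 + 1 = 8, with zeros padding the unused slots.
```

    Vectors built this way can be fed to a kernel or neural regressor; the nonlocality result in the abstract concerns what such models learn from this representation.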

  1. Machine learning predictions of molecular properties: Accurate many-body potentials and nonlocality in chemical space

    SciTech Connect

    Hansen, Katja; Biegler, Franziska; Ramakrishnan, Raghunathan; Pronobis, Wiktor; von Lilienfeld, O. Anatole; Müller, Klaus -Robert; Tkatchenko, Alexandre

    2015-06-04

    Simultaneously accurate and efficient prediction of molecular properties throughout chemical compound space is a critical ingredient toward rational compound design in chemical and pharmaceutical industries. Aiming toward this goal, we develop and apply a systematic hierarchy of efficient empirical methods to estimate atomization and total energies of molecules. These methods range from a simple sum over atoms, to addition of bond energies, to pairwise interatomic force fields, reaching to the more sophisticated machine learning approaches that are capable of describing collective interactions between many atoms or bonds. In the case of equilibrium molecular geometries, even simple pairwise force fields demonstrate prediction accuracy comparable to benchmark energies calculated using density functional theory with hybrid exchange-correlation functionals; however, accounting for the collective many-body interactions proves to be essential for approaching the “holy grail” of chemical accuracy of 1 kcal/mol for both equilibrium and out-of-equilibrium geometries. This remarkable accuracy is achieved by a vectorized representation of molecules (so-called Bag of Bonds model) that exhibits strong nonlocality in chemical space. The same representation allows us to predict accurate electronic properties of molecules, such as their polarizability and molecular frontier orbital energies.

  2. Machine learning predictions of molecular properties: Accurate many-body potentials and nonlocality in chemical space

    DOE PAGES

    Hansen, Katja; Biegler, Franziska; Ramakrishnan, Raghunathan; Pronobis, Wiktor; von Lilienfeld, O. Anatole; Müller, Klaus -Robert; Tkatchenko, Alexandre

    2015-06-04

    Simultaneously accurate and efficient prediction of molecular properties throughout chemical compound space is a critical ingredient toward rational compound design in chemical and pharmaceutical industries. Aiming toward this goal, we develop and apply a systematic hierarchy of efficient empirical methods to estimate atomization and total energies of molecules. These methods range from a simple sum over atoms, to addition of bond energies, to pairwise interatomic force fields, reaching to the more sophisticated machine learning approaches that are capable of describing collective interactions between many atoms or bonds. In the case of equilibrium molecular geometries, even simple pairwise force fields demonstrate prediction accuracy comparable to benchmark energies calculated using density functional theory with hybrid exchange-correlation functionals; however, accounting for the collective many-body interactions proves to be essential for approaching the “holy grail” of chemical accuracy of 1 kcal/mol for both equilibrium and out-of-equilibrium geometries. This remarkable accuracy is achieved by a vectorized representation of molecules (so-called Bag of Bonds model) that exhibits strong nonlocality in chemical space. The same representation allows us to predict accurate electronic properties of molecules, such as their polarizability and molecular frontier orbital energies.

  3. Machine Learning Predictions of Molecular Properties: Accurate Many-Body Potentials and Nonlocality in Chemical Space

    PubMed Central

    2015-01-01

    Simultaneously accurate and efficient prediction of molecular properties throughout chemical compound space is a critical ingredient toward rational compound design in chemical and pharmaceutical industries. Aiming toward this goal, we develop and apply a systematic hierarchy of efficient empirical methods to estimate atomization and total energies of molecules. These methods range from a simple sum over atoms, to addition of bond energies, to pairwise interatomic force fields, reaching to the more sophisticated machine learning approaches that are capable of describing collective interactions between many atoms or bonds. In the case of equilibrium molecular geometries, even simple pairwise force fields demonstrate prediction accuracy comparable to benchmark energies calculated using density functional theory with hybrid exchange-correlation functionals; however, accounting for the collective many-body interactions proves to be essential for approaching the “holy grail” of chemical accuracy of 1 kcal/mol for both equilibrium and out-of-equilibrium geometries. This remarkable accuracy is achieved by a vectorized representation of molecules (so-called Bag of Bonds model) that exhibits strong nonlocality in chemical space. In addition, the same representation allows us to predict accurate electronic properties of molecules, such as their polarizability and molecular frontier orbital energies. PMID:26113956

  4. Standardized EEG interpretation accurately predicts prognosis after cardiac arrest

    PubMed Central

    Rossetti, Andrea O.; van Rootselaar, Anne-Fleur; Wesenberg Kjaer, Troels; Horn, Janneke; Ullén, Susann; Friberg, Hans; Nielsen, Niklas; Rosén, Ingmar; Åneman, Anders; Erlinge, David; Gasche, Yvan; Hassager, Christian; Hovdenes, Jan; Kjaergaard, Jesper; Kuiper, Michael; Pellis, Tommaso; Stammet, Pascal; Wanscher, Michael; Wetterslev, Jørn; Wise, Matt P.; Cronberg, Tobias

    2016-01-01

    Objective: To identify reliable predictors of outcome in comatose patients after cardiac arrest using a single routine EEG and standardized interpretation according to the terminology proposed by the American Clinical Neurophysiology Society. Methods: In this cohort study, 4 EEG specialists, blinded to outcome, evaluated prospectively recorded EEGs in the Target Temperature Management trial (TTM trial) that randomized patients to 33°C vs 36°C. Routine EEG was performed in patients still comatose after rewarming. EEGs were classified into highly malignant (suppression, suppression with periodic discharges, burst-suppression), malignant (periodic or rhythmic patterns, pathological or nonreactive background), and benign EEG (absence of malignant features). Poor outcome was defined as best Cerebral Performance Category score 3–5 until 180 days. Results: Eight TTM sites randomized 202 patients. EEGs were recorded in 103 patients at a median 77 hours after cardiac arrest; 37% had a highly malignant EEG and all had a poor outcome (specificity 100%, sensitivity 50%). Any malignant EEG feature had a low specificity to predict poor prognosis (48%) but if 2 malignant EEG features were present specificity increased to 96% (p < 0.001). Specificity and sensitivity were not significantly affected by targeted temperature or sedation. A benign EEG was found in 1% of the patients with a poor outcome. Conclusions: Highly malignant EEG after rewarming reliably predicted poor outcome in half of patients without false predictions. An isolated finding of a single malignant feature did not predict poor outcome whereas a benign EEG was highly predictive of a good outcome. PMID:26865516
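The specificity and sensitivity figures quoted above follow directly from a 2x2 confusion table. The counts below are hypothetical, chosen only to reproduce the 50%/100% pattern reported for the highly malignant EEG finding:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts mimicking the highly malignant EEG pattern: half of the
# poor-outcome patients are flagged (sensitivity 50%), and no good-outcome
# patient is ever flagged (specificity 100%).
sens, spec = sensitivity_specificity(tp=38, fn=38, tn=27, fp=0)
print(f"sensitivity={sens:.0%} specificity={spec:.0%}")  # sensitivity=50% specificity=100%
```

A 100%-specific marker never produces a false prediction of poor outcome, which is why the authors emphasize it even though it misses half of the poor-outcome patients.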

  5. PredictSNP: Robust and Accurate Consensus Classifier for Prediction of Disease-Related Mutations

    PubMed Central

    Bendl, Jaroslav; Stourac, Jan; Salanda, Ondrej; Pavelka, Antonin; Wieben, Eric D.; Zendulka, Jaroslav; Brezovsky, Jan; Damborsky, Jiri

    2014-01-01

    Single nucleotide variants represent a prevalent form of genetic variation. Mutations in the coding regions are frequently associated with the development of various genetic diseases. Computational tools for the prediction of the effects of mutations on protein function are very important for analysis of single nucleotide variants and their prioritization for experimental characterization. Many computational tools are already widely employed for this purpose. Unfortunately, their comparison and further improvement is hindered by large overlaps between the training datasets and benchmark datasets, which lead to biased and overly optimistic reported performances. In this study, we have constructed three independent datasets by removing all duplicates, inconsistencies and mutations previously used in the training of evaluated tools. The benchmark dataset containing over 43,000 mutations was employed for the unbiased evaluation of eight established prediction tools: MAPP, nsSNPAnalyzer, PANTHER, PhD-SNP, PolyPhen-1, PolyPhen-2, SIFT and SNAP. The six best performing tools were combined into a consensus classifier, PredictSNP, resulting in significantly improved prediction performance while at the same time returning results for all mutations, confirming that consensus prediction represents an accurate and robust alternative to the predictions delivered by individual tools. A user-friendly web interface enables easy access to all eight prediction tools, the consensus classifier PredictSNP and annotations from the Protein Mutant Database and the UniProt database. The web server and the datasets are freely available to the academic community at http://loschmidt.chemi.muni.cz/predictsnp. PMID:24453961
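A consensus classifier of the kind described can be sketched as a (weighted) majority vote over the individual tools' calls. The tool names below are taken from the abstract, but the voting scheme and uniform weights are illustrative, not PredictSNP's actual combination rule:

```python
def consensus_predict(tool_predictions, weights=None):
    """Weighted majority vote over per-tool calls of 'deleterious'/'neutral'.
    Returns the consensus label and an agreement-based confidence score."""
    weights = weights or {t: 1.0 for t in tool_predictions}
    score = sum(weights[t] * (1 if p == "deleterious" else -1)
                for t, p in tool_predictions.items())
    total = sum(weights.values())
    label = "deleterious" if score > 0 else "neutral"
    return label, abs(score) / total

# Hypothetical calls for one mutation from six of the evaluated tools:
calls = {"MAPP": "deleterious", "PhD-SNP": "deleterious", "PolyPhen-2": "deleterious",
         "SIFT": "neutral", "SNAP": "deleterious", "PANTHER": "neutral"}
label, conf = consensus_predict(calls)
print(label, round(conf, 2))  # deleterious 0.33
```

Because every tool always votes, the consensus returns a result for every mutation, which is the property the abstract highlights over individual predictors that may abstain.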

  6. Accurate contact predictions using covariation techniques and machine learning

    PubMed Central

    Kosciolek, Tomasz

    2015-01-01

    Here we present the results of residue-residue contact predictions achieved in CASP11 by the CONSIP2 server, which is based around our MetaPSICOV contact prediction method. On a set of 40 target domains with a median family size of around 40 effective sequences, our server achieved an average top-L/5 long-range contact precision of 27%. The MetaPSICOV method is based on a combination of classical contact prediction features, enhanced with three distinct covariation methods embedded in a two-stage neural network predictor. Some unique features of our approach are (1) the tuning between the classical and covariation features depending on the depth of the input alignment and (2) a hybrid approach to generate the deepest possible multiple-sequence alignments by combining jackHMMer and HHblits. We discuss the CONSIP2 pipeline and our results, and show that where the method underperformed, the major factor was relying on a fixed set of parameters for the initial sequence alignments and not attempting to perform domain splitting as a preprocessing step. Proteins 2016; 84(Suppl 1):145-151. © 2015 The Authors. Proteins: Structure, Function, and Bioinformatics Published by Wiley Periodicals, Inc. PMID:26205532
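The top-L/5 long-range precision metric quoted above can be computed as follows. This is a sketch: the 24-residue separation cutoff is the common CASP convention for "long-range", and the example contacts are invented.

```python
def top_l5_longrange_precision(predicted, true_contacts, L, min_sep=24):
    """Precision of the top-L/5 scoring long-range contact predictions.
    `predicted` holds (i, j, score) triples; a pair is long-range when the
    sequence separation |i - j| is at least `min_sep`."""
    ranked = sorted(predicted, key=lambda x: -x[2])
    long_range = [(i, j) for i, j, _ in ranked if abs(i - j) >= min_sep]
    top = long_range[: max(1, L // 5)]
    hits = sum((i, j) in top and (i, j) in true_contacts or (i, j) in true_contacts
               for i, j in top)
    return hits / len(top)

# Invented example: L=10 residues, so the metric looks at the top 2 long-range
# predictions; the short-range pair (5, 10) is excluded despite its high score.
preds = [(1, 30, 0.9), (2, 40, 0.8), (3, 50, 0.7), (5, 10, 0.99), (4, 60, 0.6)]
truth = {(1, 30), (3, 50)}
print(top_l5_longrange_precision(preds, truth, L=10))  # 0.5: one of the top 2 is correct
```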

  7. How Accurately Can We Predict Eclipses for Algol? (Poster abstract)

    NASA Astrophysics Data System (ADS)

    Turner, D.

    2016-06-01

    (Abstract only) beta Persei, or Algol, is a very well known eclipsing binary system consisting of a late B-type dwarf that is regularly eclipsed by a GK subgiant every 2.867 days. Eclipses, which last about 8 hours, are regular enough that predictions for times of minima are published in various places, Sky & Telescope magazine and The Observer's Handbook, for example. But eclipse minimum lasts for less than a half hour, whereas subtle mistakes in the current ephemeris for the star can result in predictions that are off by a few hours or more. The Algol system is fairly complex, with the Algol A and Algol B eclipsing system also orbited by Algol C with an orbital period of nearly 2 years. Added to that are complex long-term O-C variations with a periodicity of almost two centuries that, although suggested by Hoffmeister to be spurious, fit the type of light travel time variations expected for a fourth star also belonging to the system. The AB sub-system also undergoes mass transfer events that add complexities to its O-C behavior. Is it actually possible to predict precise times of eclipse minima for Algol months in advance given such complications, or is it better to encourage ongoing observations of the star so that O-C variations can be tracked in real time?
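The linear ephemeris behind such predictions, and the sensitivity to period errors the abstract worries about, can be illustrated as follows. The epoch T0 below is made up for illustration; the period is an approximate literature value, not a current ephemeris.

```python
import math

# Linear ephemeris for predicted minima: T_n = T0 + n * P.
P = 2.867328    # orbital period in days (approximate literature value)
T0 = 2457000.1  # hypothetical epoch of primary minimum, JD

def next_minima(jd_start, count=3):
    """Return the next `count` predicted times of minimum after jd_start."""
    n = math.ceil((jd_start - T0) / P)
    return [T0 + (n + k) * P for k in range(count)]

minima = next_minima(2457100.0)
# Why small ephemeris errors matter: a period error of only 0.0001 d grows to
# about half an hour of timing error after 200 cycles (~1.6 years).
drift_hours = 0.0001 * 200 * 24
print([round(m, 3) for m in minima], round(drift_hours, 2))
```

Since eclipse minimum lasts under half an hour, a drift of this size is already enough to miss it, which is the abstract's argument for tracking O-C variations in real time rather than trusting a fixed ephemeris.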

  8. ILT based defect simulation of inspection images accurately predicts mask defect printability on wafer

    NASA Astrophysics Data System (ADS)

    Deep, Prakash; Paninjath, Sankaranarayanan; Pereira, Mark; Buck, Peter

    2016-05-01

    At advanced technology nodes, mask complexity has increased because of the large-scale use of resolution enhancement technologies (RET), which include Optical Proximity Correction (OPC), Inverse Lithography Technology (ILT) and Source Mask Optimization (SMO). The number of defects detected during inspection of such masks has increased drastically, and differentiating critical from non-critical defects is more challenging, complex and time consuming. Because of the significant defectivity of EUVL masks and the non-availability of actinic inspection, it is important, and also challenging, to predict the criticality of defects for printability on wafer. This is one of the significant barriers to the adoption of EUVL for semiconductor manufacturing. Until actinic inspection becomes available, techniques are needed to judge defect criticality from images captured using non-actinic inspection. High resolution inspection of photomask images detects many defects which are used for process and mask qualification. Repairing all defects is not practical and probably not required; however, it is imperative to know which defects are severe enough to impact the wafer before repair. Additionally, a wafer printability check is always desired after repairing a defect. AIMS™ review is the industry standard for this; however, doing AIMS™ review for all defects is expensive and very time consuming. A fast, accurate and economical mechanism is desired which can predict defect printability on wafer accurately and quickly from images captured using a high resolution inspection machine. Predicting defect printability from such images is challenging because the high resolution images do not correlate with actual mask contours. The challenge is increased by the use of optical conditions during inspection that differ from the actual scanner conditions, so defects found in such images do not have a direct correlation with their actual impact on wafer. Our automated defect simulation tool predicts

  9. Accurate and predictive antibody repertoire profiling by molecular amplification fingerprinting

    PubMed Central

    Khan, Tarik A.; Friedensohn, Simon; de Vries, Arthur R. Gorter; Straszewski, Jakub; Ruscheweyh, Hans-Joachim; Reddy, Sai T.

    2016-01-01

    High-throughput antibody repertoire sequencing (Ig-seq) provides quantitative molecular information on humoral immunity. However, Ig-seq is compromised by biases and errors introduced during library preparation and sequencing. By using synthetic antibody spike-in genes, we determined that primer bias from multiplex polymerase chain reaction (PCR) library preparation resulted in antibody frequencies with only 42 to 62% accuracy. Additionally, Ig-seq errors resulted in antibody diversity measurements being overestimated by up to 5000-fold. To rectify this, we developed molecular amplification fingerprinting (MAF), which uses unique molecular identifier (UID) tagging before and during multiplex PCR amplification, which enabled tagging of transcripts while accounting for PCR efficiency. Combined with a bioinformatic pipeline, MAF bias correction led to measurements of antibody frequencies with up to 99% accuracy. We also used MAF to correct PCR and sequencing errors, resulting in enhanced accuracy of full-length antibody diversity measurements, achieving 98 to 100% error correction. Using murine MAF-corrected data, we established a quantitative metric of recent clonal expansion—the intraclonal diversity index—which measures the number of unique transcripts associated with an antibody clone. We used this intraclonal diversity index along with antibody frequencies and somatic hypermutation to build a logistic regression model for prediction of the immunological status of clones. The model was able to predict clonal status with high confidence but only when using MAF error and bias corrected Ig-seq data. Improved accuracy by MAF provides the potential to greatly advance Ig-seq and its utility in immunology and biotechnology. PMID:26998518
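The core of UID-based error correction as described above is a per-molecule consensus: reads sharing a unique molecular identifier are copies of one transcript, so a majority vote across them removes errors introduced during amplification and sequencing. A minimal sketch with toy reads, not the MAF bioinformatic pipeline itself:

```python
from collections import Counter, defaultdict

def umi_consensus(reads):
    """Collapse (umi, sequence) pairs: group reads by their unique molecular
    identifier and take the majority base at each position, so errors present
    in a minority of the PCR copies are voted out and each original transcript
    is counted exactly once."""
    groups = defaultdict(list)
    for umi, seq in reads:
        groups[umi].append(seq)
    return {umi: "".join(Counter(col).most_common(1)[0][0] for col in zip(*seqs))
            for umi, seqs in groups.items()}

# Toy reads: the 'AAT' molecule was amplified three times, once with an error.
reads = [("AAT", "ACGT"), ("AAT", "ACGT"), ("AAT", "ACTT"), ("GGC", "TTAG")]
print(umi_consensus(reads))  # {'AAT': 'ACGT', 'GGC': 'TTAG'}
```

Counting one consensus per UID rather than one per read is also what corrects the frequency bias: a molecule that amplified efficiently no longer looks more abundant than one that did not.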

  10. Accurate predictions for the production of vaporized water

    SciTech Connect

    Morin, E.; Montel, F.

    1995-12-31

    The production of water vaporized in the gas phase is controlled by the local conditions around the wellbore. The pressure gradient applied to the formation creates a sharp increase of the molar water content in the hydrocarbon phase approaching the well; this leads to a drop in the pore water saturation around the wellbore. The extent of the dehydrated zone which is formed is the key controlling the bottom-hole content of vaporized water. The maximum water content in the hydrocarbon phase at a given pressure, temperature and salinity is corrected by capillarity or adsorption phenomena depending on the actual water saturation. Describing the mass transfer of the water between the hydrocarbon phases and the aqueous phase into the tubing gives a clear idea of vaporization effects on the formation of scales. Field examples are presented for gas fields with temperatures ranging between 140°C and 180°C, where water vaporization effects are significant. Conditions for salt plugging in the tubing are predicted.

  11. Change in BMI accurately predicted by social exposure to acquaintances.

    PubMed

    Oloritun, Rahman O; Ouarda, Taha B M J; Moturu, Sai; Madan, Anmol; Pentland, Alex Sandy; Khayal, Inas

    2013-01-01

    Research has mostly focused on obesity and not on processes of BMI change more generally, although these may be key factors that lead to obesity. Studies have suggested that obesity is affected by social ties. However, these studies used survey-based data collection techniques that may be biased toward selecting only close friends and relatives. In this study, mobile phone sensing techniques were used to routinely capture social interaction data in an undergraduate dorm. By automating the capture of social interaction data, the limitations of self-reported social exposure data are avoided. This study attempts to understand and develop a model that best describes the change in BMI using social interaction data. We evaluated a cohort of 42 college students in a co-located university dorm, using interaction data automatically captured via mobile phones together with survey-based health-related information. We determined the most predictive variables for change in BMI using the least absolute shrinkage and selection operator (LASSO) method. The selected variables, together with gender, healthy diet category, and ability to manage stress, were used to build multiple linear regression models that estimate the effect of exposure and individual factors on change in BMI. We identified the best model using the Akaike Information Criterion (AIC) and R². This study found a model that explains 68% (p < 0.0001) of the variation in change in BMI. The model combined social interaction data, especially from acquaintances, and personal health-related information to explain change in BMI. This is the first study taking into account both interactions at different levels of social exposure and personal health-related information. Social interactions with acquaintances accounted for more than half the variation in change in BMI. This suggests the importance of not only individual health information but also of social interactions with the people we are exposed to, even people we may not consider close friends.
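The LASSO variable-selection step described above can be illustrated with a bare coordinate-descent implementation (soft thresholding). The data and penalty below are invented, and this is a pedagogical stand-in, not the study's actual pipeline:

```python
def lasso_cd(X, y, lam, iters=200):
    """Plain coordinate-descent LASSO: cycle over features, computing the
    correlation of each feature with the partial residual and applying the
    soft-thresholding operator, which drives weak coefficients exactly to 0."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # correlation of feature j with the residual that excludes feature j
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * w[k] for k in range(p) if k != j))
                      for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            if rho < -lam:
                w[j] = (rho + lam) / z
            elif rho > lam:
                w[j] = (rho - lam) / z
            else:
                w[j] = 0.0  # soft threshold: the feature is dropped
    return w

# y depends on feature 0 only; LASSO zeroes out the irrelevant feature 1.
X = [[1.0, 0.3], [2.0, -0.1], [3.0, 0.2], [4.0, -0.3]]
y = [2.0, 4.1, 5.9, 8.0]
w = lasso_cd(X, y, lam=0.5)
print([round(v, 2) for v in w])  # → [1.98, 0.0]
```

The exact zeros are the point: unlike ridge regression, LASSO performs variable selection, which is how the study narrowed many candidate exposure variables down to a handful.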

  12. Toward an Accurate Prediction of the Arrival Time of Geomagnetic-Effective Coronal Mass Ejections

    NASA Astrophysics Data System (ADS)

    Shi, T.; Wang, Y.; Wan, L.; Cheng, X.; Ding, M.; Zhang, J.

    2015-12-01

    Accurately predicting the arrival of coronal mass ejections (CMEs) at the Earth based on remote images is of critical significance for the study of space weather. Here we make a statistical study of 21 Earth-directed CMEs, specifically exploring the relationship between CME initial speeds and transit times. The initial speed of a CME is obtained by fitting the CME with the Graduated Cylindrical Shell model and is thus free of projection effects. We then use the drag force model to fit results of the transit time versus the initial speed. By adopting different drag regimes, i.e., the viscous, aerodynamic, and hybrid regimes, we obtain similar results, with the hybrid model yielding the smallest mean estimation error of 12.9 hr. CMEs with a propagation angle (the angle between the propagation direction and the Sun-Earth line) larger than their half-angular widths arrive at the Earth with an angular deviation caused by factors other than the radial solar wind drag. The drag force model cannot be reliably applied to such events. If we exclude these events from the sample, the prediction accuracy can be improved, i.e., the estimation error reduces to 6.8 hr. This work suggests that it is viable to predict the arrival time of CMEs at the Earth based on the initial parameters with fairly good accuracy. Thus, it provides a method of forecasting space weather 1-5 days following the occurrence of CMEs.
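A drag-based transit-time estimate of the kind described can be sketched by integrating the aerodynamic drag equation numerically. The drag parameter and solar-wind speed below are illustrative order-of-magnitude values, not the fitted parameters from the study:

```python
def transit_time(v0, gamma=2e-8, v_sw=400.0, au_km=1.496e8, dt=600.0):
    """Numerically integrate the aerodynamic drag equation
    dv/dt = -gamma * (v - v_sw) * |v - v_sw| out to 1 AU and return the
    transit time in hours. gamma (km^-1) and v_sw (km/s) are illustrative,
    not fitted values; v0 is the CME initial speed in km/s."""
    r, v, t = 0.0, v0, 0.0
    while r < au_km:
        v += -gamma * (v - v_sw) * abs(v - v_sw) * dt
        r += v * dt
        t += dt
    return t / 3600.0

# Drag pulls the CME speed toward the ambient solar wind, so transit times
# saturate instead of scaling linearly with the initial speed.
fast, slow = transit_time(1500.0), transit_time(600.0)
print(round(fast, 1), round(slow, 1))
```

This saturation toward the solar-wind speed is why a single drag model covers the 1-5 day forecasting window mentioned in the abstract.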

  13. A mathematical recursive model for accurate description of the phase behavior in the near-critical region by Generalized van der Waals Equation

    NASA Astrophysics Data System (ADS)

    Kim, Jibeom; Jeon, Joonhyeon

    2015-01-01

    Recent studies on Equations Of State (EOS) have reported that the generalized van der Waals (GvdW) equation represents the near-critical region poorly for non-polar and non-spherical molecules. Hence, there remains the problem of choosing GvdW parameters that minimize the loss in describing saturated vapor densities and vice versa. This paper describes a recursive GvdW model (rGvdW) for an accurate representation of pure fluids in the near-critical region. For the performance evaluation of rGvdW in the near-critical region, other EOS models are also applied to two groups of pure molecules: alkanes and amines. The comparison results show that rGvdW provides much more accurate and reliable predictions of pressure than the others. This approach to constructing the EOS also gives additional insight into the physical significance of accurately predicting pressure in the near-critical region.
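For context, the classic van der Waals equation that GvdW-type models generalize can be evaluated in a few lines, including the critical point that follows from setting the first two volume derivatives of pressure to zero. The CO2 constants below (in L, atm, K) are a standard illustrative parameterization, not values from the paper:

```python
def vdw_pressure(T, V, a=3.640, b=0.04267, R=0.082057):
    """Classic van der Waals pressure P = RT/(V - b) - a/V**2. GvdW-type
    models generalize the attractive (a) and repulsive (b) terms."""
    return R * T / (V - b) - a / V ** 2

# The vdW critical point follows from dP/dV = d2P/dV2 = 0:
#   Vc = 3b,  Tc = 8a / (27Rb),  Pc = a / (27 b**2)
a, b, R = 3.640, 0.04267, 0.082057
Tc = 8 * a / (27 * R * b)
Vc = 3 * b
Pc = a / (27 * b ** 2)
print(round(Tc, 1), round(vdw_pressure(Tc, Vc) / Pc, 6))  # the ratio is exactly 1
```

The rigid coupling of (Tc, Vc, Pc) to just two parameters is precisely why plain vdW-family equations struggle near the critical point, and what recursive parameterizations such as rGvdW try to relax.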

  14. Predicting speech intelligibility in noise for hearing-critical jobs

    NASA Astrophysics Data System (ADS)

    Soli, Sigfrid D.; Laroche, Chantal; Giguere, Christian

    2003-10-01

    Many jobs require auditory abilities such as speech communication, sound localization, and sound detection. An employee for whom these abilities are impaired may constitute a safety risk for himself or herself, for fellow workers, and possibly for the general public. A number of methods have been used to predict these abilities from diagnostic measures of hearing (e.g., the pure-tone audiogram); however, these methods have not proved to be sufficiently accurate for predicting performance in the noise environments where hearing-critical jobs are performed. We have taken an alternative and potentially more accurate approach. A direct measure of speech intelligibility in noise, the Hearing in Noise Test (HINT), is instead used to screen individuals. The screening criteria are validated by establishing the empirical relationship between the HINT score and the auditory abilities of the individual, as measured in laboratory recreations of real-world workplace noise environments. The psychometric properties of the HINT enable screening of individuals with an acceptable amount of error. In this presentation, we will describe the predictive model and report the results of field measurements and laboratory studies used to provide empirical validation of the model. [Work supported by Fisheries and Oceans Canada.]

  15. Highly Accurate Prediction of Protein-Protein Interactions via Incorporating Evolutionary Information and Physicochemical Characteristics

    PubMed Central

    Li, Zheng-Wei; You, Zhu-Hong; Chen, Xing; Gui, Jie; Nie, Ru

    2016-01-01

    Protein-protein interactions (PPIs) occur at almost all levels of cell functions and play crucial roles in various cellular processes. Thus, identification of PPIs is critical for deciphering the molecular mechanisms and further providing insight into biological processes. Although a variety of high-throughput experimental techniques have been developed to identify PPIs, the PPI pairs identified by experimental approaches cover only a small fraction of the whole PPI networks; further, those approaches hold inherent disadvantages, such as being time-consuming, being expensive, and having a high false positive rate. Therefore, it is urgent and imperative to develop automatic in silico approaches to predict PPIs efficiently and accurately. In this article, we propose a novel feature extraction method that mixes physicochemical and evolutionary-based information for predicting PPIs, using our newly developed discriminative vector machine (DVM) classifier. The improvements of the proposed method consist mainly in introducing an effective feature extraction method that can capture discriminative features from the evolutionary-based information and physicochemical characteristics, after which a powerful and robust DVM classifier is employed. To the best of our knowledge, this is the first time that the DVM model has been applied in the field of bioinformatics. When applying the proposed method to the Yeast and Helicobacter pylori (H. pylori) datasets, we obtain excellent prediction accuracies of 94.35% and 90.61%, respectively. The computational results indicate that our method is effective and robust for predicting PPIs, and can be taken as a useful supplementary tool to the traditional experimental methods for future proteomics research. PMID:27571061

  16. Highly Accurate Prediction of Protein-Protein Interactions via Incorporating Evolutionary Information and Physicochemical Characteristics.

    PubMed

    Li, Zheng-Wei; You, Zhu-Hong; Chen, Xing; Gui, Jie; Nie, Ru

    2016-01-01

    Protein-protein interactions (PPIs) occur at almost all levels of cell functions and play crucial roles in various cellular processes. Thus, identification of PPIs is critical for deciphering the molecular mechanisms and further providing insight into biological processes. Although a variety of high-throughput experimental techniques have been developed to identify PPIs, the PPI pairs identified by experimental approaches cover only a small fraction of the whole PPI networks; further, those approaches hold inherent disadvantages, such as being time-consuming, being expensive, and having a high false positive rate. Therefore, it is urgent and imperative to develop automatic in silico approaches to predict PPIs efficiently and accurately. In this article, we propose a novel feature extraction method that mixes physicochemical and evolutionary-based information for predicting PPIs, using our newly developed discriminative vector machine (DVM) classifier. The improvements of the proposed method consist mainly in introducing an effective feature extraction method that can capture discriminative features from the evolutionary-based information and physicochemical characteristics, after which a powerful and robust DVM classifier is employed. To the best of our knowledge, this is the first time that the DVM model has been applied in the field of bioinformatics. When applying the proposed method to the Yeast and Helicobacter pylori (H. pylori) datasets, we obtain excellent prediction accuracies of 94.35% and 90.61%, respectively. The computational results indicate that our method is effective and robust for predicting PPIs, and can be taken as a useful supplementary tool to the traditional experimental methods for future proteomics research. PMID:27571061

  17. Highly Accurate Prediction of Protein-Protein Interactions via Incorporating Evolutionary Information and Physicochemical Characteristics.

    PubMed

    Li, Zheng-Wei; You, Zhu-Hong; Chen, Xing; Gui, Jie; Nie, Ru

    2016-01-01

    Protein-protein interactions (PPIs) occur at almost all levels of cell functions and play crucial roles in various cellular processes. Thus, identification of PPIs is critical for deciphering the molecular mechanisms and further providing insight into biological processes. Although a variety of high-throughput experimental techniques have been developed to identify PPIs, the PPI pairs identified by experimental approaches cover only a small fraction of the whole PPI networks; further, those approaches hold inherent disadvantages, such as being time-consuming, being expensive, and having a high false positive rate. Therefore, it is urgent and imperative to develop automatic in silico approaches to predict PPIs efficiently and accurately. In this article, we propose a novel feature extraction method that mixes physicochemical and evolutionary-based information for predicting PPIs, using our newly developed discriminative vector machine (DVM) classifier. The improvements of the proposed method consist mainly in introducing an effective feature extraction method that can capture discriminative features from the evolutionary-based information and physicochemical characteristics, after which a powerful and robust DVM classifier is employed. To the best of our knowledge, this is the first time that the DVM model has been applied in the field of bioinformatics. When applying the proposed method to the Yeast and Helicobacter pylori (H. pylori) datasets, we obtain excellent prediction accuracies of 94.35% and 90.61%, respectively. The computational results indicate that our method is effective and robust for predicting PPIs, and can be taken as a useful supplementary tool to the traditional experimental methods for future proteomics research.

  18. Critical conceptualism in environmental modeling and prediction.

    PubMed

    Christakos, G

    2003-10-15

    Many important problems in environmental science and engineering are of a conceptual nature. Research and development, however, often becomes so preoccupied with technical issues, which are themselves fascinating, that it neglects essential methodological elements of conceptual reasoning and theoretical inquiry. This work suggests that valuable insight into environmental modeling can be gained by means of critical conceptualism which focuses on the software of human reason and, in practical terms, leads to a powerful methodological framework of space-time modeling and prediction. A knowledge synthesis system develops the rational means for the epistemic integration of various physical knowledge bases relevant to the natural system of interest in order to obtain a realistic representation of the system, provide a rigorous assessment of the uncertainty sources, generate meaningful predictions of environmental processes in space-time, and produce science-based decisions. No restriction is imposed on the shape of the distribution model or the form of the predictor (non-Gaussian distributions, multiple-point statistics, and nonlinear models are automatically incorporated). The scientific reasoning structure underlying knowledge synthesis involves teleologic criteria and stochastic logic principles which have important advantages over the reasoning method of conventional space-time techniques. Insight is gained in terms of real world applications, including the following: the study of global ozone patterns in the atmosphere using data sets generated by instruments on board the Nimbus 7 satellite and secondary information in terms of total ozone-tropopause pressure models; the mapping of arsenic concentrations in the Bangladesh drinking water by assimilating hard and soft data from an extensive network of monitoring wells; and the dynamic imaging of probability distributions of pollutants across the Kalamazoo river. PMID:14594379

  19. Accurate and precise determination of critical properties from Gibbs ensemble Monte Carlo simulations

    SciTech Connect

    Dinpajooh, Mohammadhasan; Bai, Peng; Allan, Douglas A.; Siepmann, J. Ilja

    2015-09-21

Since the seminal paper by Panagiotopoulos [Mol. Phys. 61, 813 (1987)], the Gibbs ensemble Monte Carlo (GEMC) method has been the most popular particle-based simulation approach for the computation of vapor–liquid phase equilibria. However, the validity of GEMC simulations in the near-critical region has been questioned because rigorous finite-size scaling approaches cannot be applied to simulations with fluctuating volume. Valleau [Mol. Simul. 29, 627 (2003)] has argued that GEMC simulations would lead to a spurious overestimation of the critical temperature. More recently, Patel et al. [J. Chem. Phys. 134, 024101 (2011)] opined that the use of analytical tail corrections would be problematic in the near-critical region. To address these issues, we perform extensive GEMC simulations for Lennard-Jones particles in the near-critical region varying the system size, the overall system density, and the cutoff distance. For a system with N = 5500 particles, potential truncation at 8σ and analytical tail corrections, an extrapolation of GEMC simulation data at temperatures in the range from 1.27 to 1.305 yields Tc = 1.3128 ± 0.0016, ρc = 0.316 ± 0.004, and pc = 0.1274 ± 0.0013 in excellent agreement with the thermodynamic limit determined by Potoff and Panagiotopoulos [J. Chem. Phys. 109, 10914 (1998)] using grand canonical Monte Carlo simulations and finite-size scaling. Critical properties estimated using GEMC simulations with different overall system densities (0.296 ≤ ρt ≤ 0.336) agree to within the statistical uncertainties. For simulations with tail corrections, data obtained using rcut = 3.5σ yield Tc and pc that are higher by 0.2% and 1.4% than simulations with rcut = 5 and 8σ but still with overlapping 95% confidence intervals. In contrast, GEMC simulations with a truncated and shifted potential show that rcut = 8σ is insufficient to obtain accurate results. Additional GEMC simulations for hard

  20. Accurate and precise determination of critical properties from Gibbs ensemble Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Dinpajooh, Mohammadhasan; Bai, Peng; Allan, Douglas A.; Siepmann, J. Ilja

    2015-09-01

Since the seminal paper by Panagiotopoulos [Mol. Phys. 61, 813 (1987)], the Gibbs ensemble Monte Carlo (GEMC) method has been the most popular particle-based simulation approach for the computation of vapor-liquid phase equilibria. However, the validity of GEMC simulations in the near-critical region has been questioned because rigorous finite-size scaling approaches cannot be applied to simulations with fluctuating volume. Valleau [Mol. Simul. 29, 627 (2003)] has argued that GEMC simulations would lead to a spurious overestimation of the critical temperature. More recently, Patel et al. [J. Chem. Phys. 134, 024101 (2011)] opined that the use of analytical tail corrections would be problematic in the near-critical region. To address these issues, we perform extensive GEMC simulations for Lennard-Jones particles in the near-critical region varying the system size, the overall system density, and the cutoff distance. For a system with N = 5500 particles, potential truncation at 8σ and analytical tail corrections, an extrapolation of GEMC simulation data at temperatures in the range from 1.27 to 1.305 yields Tc = 1.3128 ± 0.0016, ρc = 0.316 ± 0.004, and pc = 0.1274 ± 0.0013 in excellent agreement with the thermodynamic limit determined by Potoff and Panagiotopoulos [J. Chem. Phys. 109, 10914 (1998)] using grand canonical Monte Carlo simulations and finite-size scaling. Critical properties estimated using GEMC simulations with different overall system densities (0.296 ≤ ρt ≤ 0.336) agree to within the statistical uncertainties. For simulations with tail corrections, data obtained using rcut = 3.5σ yield Tc and pc that are higher by 0.2% and 1.4% than simulations with rcut = 5 and 8σ but still with overlapping 95% confidence intervals. In contrast, GEMC simulations with a truncated and shifted potential show that rcut = 8σ is insufficient to obtain accurate results. Additional GEMC simulations for hard-core square-well particles with various ranges of the
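
The extrapolation to the critical point described in the two records above is conventionally done by fitting subcritical coexistence densities to the scaling law ρl − ρg = B(Tc − T)^β together with the law of rectilinear diameters. A minimal sketch of that fit (not the authors' code; the synthetic demonstration data, the amplitudes, and the fixed 3D Ising exponent β ≈ 0.326 are assumptions):

```python
import numpy as np

BETA = 0.326  # 3D Ising critical exponent (assumed fixed)

def critical_point(T, rho_liq, rho_gas, beta=BETA):
    """Estimate (Tc, rho_c) from subcritical coexistence densities.

    Uses the scaling law  rho_l - rho_g = B * (Tc - T)**beta  and the
    law of rectilinear diameters  (rho_l + rho_g)/2 = rho_c + A*(Tc - T).
    """
    # Linearize the scaling law: (rho_l - rho_g)**(1/beta) = C * (Tc - T)
    y = (rho_liq - rho_gas) ** (1.0 / beta)
    slope, intercept = np.polyfit(T, y, 1)   # y = slope*T + intercept
    Tc = -intercept / slope                  # y vanishes at T = Tc
    # Rectilinear diameter is linear in T; evaluate it at Tc
    d_slope, d_intercept = np.polyfit(T, 0.5 * (rho_liq + rho_gas), 1)
    rho_c = d_slope * Tc + d_intercept
    return Tc, rho_c

# Synthetic demonstration with a known answer
Tc_true, rhoc_true = 1.3128, 0.316
T = np.linspace(1.27, 1.305, 8)
rho_l = rhoc_true + 0.04 * (Tc_true - T) + 0.5 * (Tc_true - T) ** BETA
rho_g = rhoc_true + 0.04 * (Tc_true - T) - 0.5 * (Tc_true - T) ** BETA
Tc_fit, rhoc_fit = critical_point(T, rho_l, rho_g)
print(round(Tc_fit, 4), round(rhoc_fit, 3))
```

With β held fixed the scaling law linearizes, so an ordinary least-squares fit suffices; a full analysis would also propagate the statistical uncertainties quoted in the abstract.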

  1. Accurate Critical Stress Intensity Factor Griffith Crack Theory Measurements by Numerical Techniques

    PubMed Central

    Petersen, Richard C.

    2014-01-01

Critical stress intensity factor (KIc) has been an approximation for fracture toughness using only load-cell measurements. However, artificial man-made cracks several orders of magnitude longer and wider than natural flaws have required a correction factor term (Y) that can be up to about 3 times the recorded experimental value [1-3]. In fact, over 30 years ago a National Academy of Sciences advisory board stated that empirical KIc testing was of serious concern and further requested that an accurate bulk fracture toughness method be found [4]. Now that fracture toughness can be calculated accurately by numerical integration from the load/deflection curve as resilience, work of fracture (WOF) and strain energy release (SIc) [5, 6], KIc appears to be unnecessary. However, the large body of previous KIc experimental test results found in the literature offers the opportunity for continued meta-analysis with other more practical and accurate fracture toughness results using energy methods and numerical integration. Therefore, KIc is derived from the classical Griffith Crack Theory [6] to include SIc as a more accurate term for strain energy release rate (𝒢Ic), along with crack surface energy (γ), crack length (a), modulus (E), applied stress (σ), Y, crack-tip plastic zone defect region (rp) and yield strength (σys), all of which can be determined from load and deflection data. To accentuate toughness differences, polymer matrix discontinuous quartz fiber-reinforced composites were prepared for flexural mechanical testing, comprising 3 mm fibers at volume percentages from 0-54.0 vol% and, at 28.2 vol%, fiber lengths from 0.0-6.0 mm. Results provided a new correction factor and regression analyses between several numerical integration fracture toughness test methods to support KIc results. Further, accurate bulk KIc experimental values are compared with empirical test results found in the literature. Also, several fracture toughness mechanisms

  2. Can radiation therapy treatment planning system accurately predict surface doses in postmastectomy radiation therapy patients?

    SciTech Connect

    Wong, Sharon; Back, Michael; Tan, Poh Wee; Lee, Khai Mun; Baggarley, Shaun; Lu, Jaide Jay

    2012-07-01

Skin doses have been an important factor in the dose prescription for breast radiotherapy. Recent advances in radiotherapy treatment techniques, such as intensity-modulated radiation therapy (IMRT), and new treatment schemes, such as hypofractionated breast therapy, have made the precise determination of the surface dose necessary. Detailed information on the dose at various depths of the skin is also critical in designing new treatment strategies. The purpose of this work was to assess the accuracy of surface dose calculation by a clinically used treatment planning system against doses measured by thermoluminescence dosimeters (TLDs) in a customized chest wall phantom. This study involved the construction of a chest wall phantom for skin dose assessment. Seven TLDs were distributed throughout each right chest wall phantom to give adequate representation of measured radiation doses. Point doses from the CMS XiO® treatment planning system (TPS) were calculated for each relevant TLD position and the results correlated. There was no significant difference between the absorbed doses measured by TLD and those calculated by the TPS (p > 0.05, 1-tailed). Dose accuracy of up to 2.21% was found. The deviations from the calculated absorbed doses were overall larger (3.4%) when wedges and bolus were used. A 3D radiotherapy TPS is a useful and accurate tool to assess the accuracy of surface dose. Our studies have shown that radiation treatment accuracy, expressed as a comparison between calculated doses (by TPS) and measured doses (by TLD dosimetry), can be accurately predicted for tangential treatment of the chest wall after mastectomy.
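
The comparison in this record reduces to per-position percent deviations between TPS-calculated and TLD-measured doses. A minimal sketch with hypothetical dose values (the actual TLD readings are not given in the abstract):

```python
def percent_deviation(calculated, measured):
    """Per-point deviation (%) of TPS-calculated dose from TLD-measured dose."""
    return [100.0 * (c - m) / m for c, m in zip(calculated, measured)]

# Hypothetical doses (Gy) at seven TLD positions
tps = [1.98, 2.02, 2.05, 1.95, 2.00, 2.04, 1.97]
tld = [2.00, 2.00, 2.01, 1.99, 2.02, 2.00, 2.00]
dev = percent_deviation(tps, tld)
print(f"largest absolute deviation: {max(abs(d) for d in dev):.2f}%")
```

A study like this one would additionally run a one-tailed paired significance test on the two dose series; the sketch shows only the deviation bookkeeping.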

  3. Accurate prediction of band gaps and optical properties of HfO2

    NASA Astrophysics Data System (ADS)

    Ondračka, Pavel; Holec, David; Nečas, David; Zajíčková, Lenka

    2016-10-01

We report on optical properties of various polymorphs of hafnia predicted within the framework of density functional theory. The full potential linearised augmented plane wave method was employed together with the Tran-Blaha modified Becke-Johnson potential (TB-mBJ) for exchange and local density approximation for correlation. Unit cells of monoclinic, cubic and tetragonal crystalline, and a simulated annealing-based model of amorphous hafnia were fully relaxed with respect to internal positions and lattice parameters. Electronic structures and band gaps for monoclinic, cubic, tetragonal and amorphous hafnia were calculated using three different TB-mBJ parametrisations and the results were critically compared with the available experimental and theoretical reports. Conceptual differences between a straightforward comparison of experimental measurements to a calculated band gap on the one hand and to a whole electronic structure (density of electronic states) on the other hand were pointed out, suggesting the latter should be used whenever possible. Finally, dielectric functions were calculated at two levels, using the random phase approximation without local field effects and with a more accurate Bethe-Salpeter equation (BSE) to account for excitonic effects. We conclude that a satisfactory agreement with experimental data for HfO2 was obtained only in the latter case.

  4. Prediction of critical grout parameters: critical flow rate

    SciTech Connect

    Tallent, O.K.; McDaniel, E.W.; Godsey, T.T.; Dodson, K.E.

    1986-01-01

    Waste disposal is rapidly becoming one of the most important technological endeavors of our time and fixation of waste in cement-based materials is an important part of the endeavor. Investigations of given wastes are usually individually conducted and reported. In this study, data obtained from investigation of critical flow rates for three distinctly different wastes are correlated with apparent viscosity data via a single empirical equation. Critical flow rate, which is an important variable in waste grout work, is defined as the flow rate at which a grout must be pumped through a reference pipe to obtain turbulent flow. It is important that the grout flow be turbulent since laminar flow allows caking on pipe walls and causes eventual plugging. The three wastes used in this study can be characterized as containing: (1) high nitrate, carbonate, and sulfate; (2) high phosphate; and (3) high fluoride, ammonium, and suspended solids waste. The measurements of apparent viscosity (grouts are non-Newtonian fluids) and other measurements to obtain data to calculate the critical flow rates were made using a Fann-Direct Reading Viscometer, Model 35A.
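
The critical flow rate defined above follows from a Reynolds-number criterion. A sketch treating the grout as a Newtonian-equivalent fluid with the measured apparent viscosity (a simplification, since the report notes grouts are non-Newtonian; the transition Reynolds number of 2100 and the example pipe and fluid values are assumptions):

```python
import math

RE_CRITICAL = 2100.0  # assumed laminar-turbulent transition Reynolds number

def critical_flow_rate(density, apparent_viscosity, pipe_diameter,
                       re_critical=RE_CRITICAL):
    """Volumetric flow rate (m^3/s) above which pipe flow is turbulent.

    From Re = rho * v * D / mu with mean velocity v = Q / (pi * D**2 / 4):
        Q_crit = Re_crit * pi * D * mu / (4 * rho)
    """
    return re_critical * math.pi * pipe_diameter * apparent_viscosity / (4.0 * density)

# Hypothetical grout: rho = 1600 kg/m^3, apparent viscosity 0.05 Pa*s,
# pumped through a 50 mm reference pipe
q = critical_flow_rate(1600.0, 0.05, 0.050)
print(f"critical flow rate: {q * 1000:.2f} L/s")
```

Pumping above this rate keeps the flow turbulent and so avoids the wall caking associated with laminar flow; a rigorous non-Newtonian treatment would use the rheological model parameters measured on the viscometer rather than a single apparent viscosity.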

  5. Extracting accurate strain measurements in bone mechanics: A critical review of current methods.

    PubMed

    Grassi, Lorenzo; Isaksson, Hanna

    2015-10-01

    Osteoporosis related fractures are a social burden that advocates for more accurate fracture prediction methods. Mechanistic methods, e.g. finite element models, have been proposed as a tool to better predict bone mechanical behaviour and strength. However, there is little consensus about the optimal constitutive law to describe bone as a material. Extracting reliable and relevant strain data from experimental tests is of fundamental importance to better understand bone mechanical properties, and to validate numerical models. Several techniques have been used to measure strain in experimental mechanics, with substantial differences in terms of accuracy, precision, time- and length-scale. Each technique presents upsides and downsides that must be carefully evaluated when designing the experiment. Moreover, additional complexities are often encountered when applying such strain measurement techniques to bone, due to its complex composite structure. This review of literature examined the four most commonly adopted methods for strain measurements (strain gauges, fibre Bragg grating sensors, digital image correlation, and digital volume correlation), with a focus on studies with bone as a substrate material, at the organ and tissue level. For each of them the working principles, a summary of the main applications to bone mechanics at the organ- and tissue-level, and a list of pros and cons are provided. PMID:26099201

  6. Mind-set and close relationships: when bias leads to (In)accurate predictions.

    PubMed

    Gagné, F M; Lydon, J E

    2001-07-01

    The authors investigated whether mind-set influences the accuracy of relationship predictions. Because people are more biased in their information processing when thinking about implementing an important goal, relationship predictions made in an implemental mind-set were expected to be less accurate than those made in a more impartial deliberative mind-set. In Study 1, open-ended thoughts of students about to leave for university were coded for mind-set. In Study 2, mind-set about a major life goal was assessed using a self-report measure. In Study 3, mind-set was experimentally manipulated. Overall, mind-set interacted with forecasts to predict relationship survival. Forecasts were more accurate in a deliberative mind-set than in an implemental mind-set. This effect was more pronounced for long-term than for short-term relationship survival. Finally, deliberatives were not pessimistic; implementals were unduly optimistic.

  7. Modeling methodology for the accurate and prompt prediction of symptomatic events in chronic diseases.

    PubMed

    Pagán, Josué; Risco-Martín, José L; Moya, José M; Ayala, José L

    2016-08-01

Prediction of symptomatic crises in chronic diseases makes it possible to take decisions before the symptoms occur, such as the intake of drugs to avoid the symptoms or the activation of medical alarms. The prediction horizon is, in this case, an important parameter in order to fulfill the pharmacokinetics of medications or the time response of medical services. This paper presents a study of the prediction limits of a chronic disease with symptomatic crises: the migraine. For that purpose, this work develops a methodology to build predictive migraine models and to improve these predictions beyond the limits of the initial models. The maximum prediction horizon is analyzed, and its dependency on the selected features is studied. A strategy for model selection is proposed to tackle the trade-off between conservative but robust predictive models and less accurate predictions with longer horizons. The obtained results show a prediction horizon close to 40 min, which is in the time range of the drug pharmacokinetics. Experiments have been performed in a realistic scenario where input data have been acquired in an ambulatory clinical study by the deployment of a non-intrusive Wireless Body Sensor Network. Our results provide an effective methodology for the selection of the future horizon in the development of prediction algorithms for diseases experiencing symptomatic crises. PMID:27260782

  8. SIFTER search: a web server for accurate phylogeny-based protein function prediction.

    PubMed

    Sahraeian, Sayed M; Luo, Kevin R; Brenner, Steven E

    2015-07-01

    We are awash in proteins discovered through high-throughput sequencing projects. As only a minuscule fraction of these have been experimentally characterized, computational methods are widely used for automated annotation. Here, we introduce a user-friendly web interface for accurate protein function prediction using the SIFTER algorithm. SIFTER is a state-of-the-art sequence-based gene molecular function prediction algorithm that uses a statistical model of function evolution to incorporate annotations throughout the phylogenetic tree. Due to the resources needed by the SIFTER algorithm, running SIFTER locally is not trivial for most users, especially for large-scale problems. The SIFTER web server thus provides access to precomputed predictions on 16 863 537 proteins from 232 403 species. Users can explore SIFTER predictions with queries for proteins, species, functions, and homologs of sequences not in the precomputed prediction set. The SIFTER web server is accessible at http://sifter.berkeley.edu/ and the source code can be downloaded.

  9. An Approach for Validating Actinide and Fission Product Burnup Credit Criticality Safety Analyses: Criticality (keff) Predictions

    DOE PAGES

    Scaglione, John M.; Mueller, Don E.; Wagner, John C.

    2014-12-01

One of the most important remaining challenges associated with expanded implementation of burnup credit in the United States is the validation of depletion and criticality calculations used in the safety evaluation; in particular, the availability and use of applicable measured data to support validation, especially for fission products (FPs). Applicants and regulatory reviewers have been constrained by both a scarcity of data and a lack of a clear technical basis or approach for use of the data. This paper describes a validation approach for commercial spent nuclear fuel (SNF) criticality safety (keff) evaluations based on best-available data and methods, and applies the approach to representative SNF storage and transport configurations/conditions to demonstrate its usage and applicability, as well as to provide reference bias results. The criticality validation approach utilizes not only available laboratory critical experiment (LCE) data from the International Handbook of Evaluated Criticality Safety Benchmark Experiments and the French Haut Taux de Combustion program to support validation of the principal actinides but also calculated sensitivities, nuclear data uncertainties, and limited available FP LCE data to predict and verify individual biases for relevant minor actinides and FPs. The results demonstrate that (a) sufficient critical experiment data exist to adequately validate keff calculations via conventional validation approaches for the primary actinides, (b) sensitivity-based critical experiment selection is more appropriate for generating accurate application model bias and uncertainty, and (c) calculated sensitivities and nuclear data uncertainties can be used for generating conservative estimates of bias for minor actinides and FPs. Results based on SCALE 6.1 and the ENDF/B-VII.0 cross-section libraries indicate that a conservative estimate of the bias for the minor actinides and FPs is 1.5% of their worth within the

  10. A Single Linear Prediction Filter that Accurately Predicts the AL Index

    NASA Astrophysics Data System (ADS)

    McPherron, R. L.; Chu, X.

    2015-12-01

The AL index is a measure of the strength of the westward electrojet flowing along the auroral oval. It has two components: one from the global DP-2 current system and a second from the DP-1 current that is more localized near midnight. It is generally believed that the index is a very poor measure of these currents because of its dependence on the distance of stations from the source of the two currents. In fact, over season and solar cycle the coupling strength, defined as the steady-state ratio of the output AL to the input coupling function, varies by a factor of four. There are four factors that lead to this variation. First is the equinoctial effect that modulates coupling strength with peaks (strongest coupling) at the equinoxes. Second is the saturation of the polar cap potential, which decreases coupling strength as the strength of the driver increases. Since saturation occurs more frequently at solar maximum, we obtain the result that maximum coupling strength occurs at equinox at solar minimum. A third factor is ionospheric conductivity, with stronger coupling at summer solstice as compared to winter. The fourth factor is the definition of a solar wind coupling function appropriate to a given index. We have developed an optimum coupling function depending on solar wind speed, density, transverse magnetic field, and IMF clock angle which is better than previous functions. Using this we have determined the seasonal variation of coupling strength and developed an inverse function that modulates the optimum coupling function so that all seasonal variation is removed. In a similar manner we have determined the dependence of coupling strength on solar wind driver strength. The inverse of this function is used to scale a linear prediction filter, thus eliminating the dependence on driver strength. Our result is a single linear filter that is adjusted in a nonlinear manner by driver strength and an optimum coupling function that is seasonally modulated. Together this
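
The construction described (one fixed linear filter whose output is rescaled by a nonlinear function of driver strength and season) can be sketched as a convolution followed by a gain correction. The impulse response and gain below are toy assumptions, not the authors' fitted filter:

```python
import numpy as np

def predict_al(coupling, impulse_response, gain):
    """Predict AL by convolving a solar-wind coupling function with a
    single fixed linear filter, then rescaling by a state-dependent gain.

    coupling         : 1-D array of solar-wind coupling function samples
    impulse_response : 1-D array, the linear prediction filter
    gain             : scalar (or array matching `coupling`) encoding the
                       nonlinear correction for driver strength / season
    """
    linear = np.convolve(coupling, impulse_response)[: len(coupling)]
    return gain * linear

# Toy demonstration: an exponentially decaying filter driven by a step input
h = 0.5 * np.exp(-np.arange(20) / 5.0)  # assumed impulse response
drive = np.ones(50)                     # steady driving
al = predict_al(drive, h, gain=1.0)
# with steady driving, the output settles at gain * sum(h)
print(round(al[-1], 3), round(h.sum(), 3))
```

In the study itself the gain would be the product of the inverse seasonal modulation and the inverse driver-strength function; here it is a single scalar for illustration.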

  11. A review of the kinetic detail required for accurate predictions of normal shock waves

    NASA Technical Reports Server (NTRS)

    Muntz, E. P.; Erwin, Daniel A.; Pham-Van-diep, Gerald C.

    1991-01-01

    Several aspects of the kinetic models used in the collision phase of Monte Carlo direct simulations have been studied. Accurate molecular velocity distribution function predictions require a significantly increased number of computational cells in one maximum slope shock thickness, compared to predictions of macroscopic properties. The shape of the highly repulsive portion of the interatomic potential for argon is not well modeled by conventional interatomic potentials; this portion of the potential controls high Mach number shock thickness predictions, indicating that the specification of the energetic repulsive portion of interatomic or intermolecular potentials must be chosen with care for correct modeling of nonequilibrium flows at high temperatures. It has been shown for inverse power potentials that the assumption of variable hard sphere scattering provides accurate predictions of the macroscopic properties in shock waves, by comparison with simulations in which differential scattering is employed in the collision phase. On the other hand, velocity distribution functions are not well predicted by the variable hard sphere scattering model for softer potentials at higher Mach numbers.

  12. Can phenological models predict tree phenology accurately under climate change conditions?

    NASA Astrophysics Data System (ADS)

    Chuine, Isabelle; Bonhomme, Marc; Legave, Jean Michel; García de Cortázar-Atauri, Inaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry

    2014-05-01

The onset of the growing season of trees has globally been earlier by 2.3 days/decade during the last 50 years because of global warming, and this trend is predicted to continue according to climate forecasts. The effect of temperature on plant phenology is however not linear because temperature has a dual effect on bud development. On one hand, low temperatures are necessary to break bud dormancy, and on the other hand, higher temperatures are necessary to promote bud cell growth afterwards. Increasing phenological changes in temperate woody species have strong impacts on forest tree distribution and productivity, as well as crop cultivation areas. Accurate predictions of tree phenology are therefore a prerequisite to understand and foresee the impacts of climate change on forests and agrosystems. Different process-based models have been developed in the last two decades to predict the date of budburst or flowering of woody species. There are two main families: (1) one-phase models, which consider only the ecodormancy phase and make the assumption that endodormancy is always broken before adequate climatic conditions for cell growth occur; and (2) two-phase models, which consider both the endodormancy and ecodormancy phases and predict a date of dormancy break which varies from year to year. So far, one-phase models have been able to accurately predict tree bud break and flowering under historical climate. However, because they do not consider what happens prior to ecodormancy, and especially the possible negative effect of winter temperature warming on dormancy break, it seems unlikely that they can provide accurate predictions in future climate conditions. It is indeed well known that a lack of low temperature results in an abnormal pattern of bud break and development in temperate fruit trees. An accurate modelling of the dormancy break date has thus become a major issue in phenology modelling. Two-phase phenological models predict that global warming should delay

  13. Can phenological models predict tree phenology accurately in the future? The unrevealed hurdle of endodormancy break.

    PubMed

    Chuine, Isabelle; Bonhomme, Marc; Legave, Jean-Michel; García de Cortázar-Atauri, Iñaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry

    2016-10-01

The onset of the growing season of trees has been earlier by 2.3 days per decade during the last 40 years in temperate Europe because of global warming. The effect of temperature on plant phenology is, however, not linear because temperature has a dual effect on bud development. On one hand, low temperatures are necessary to break bud endodormancy, and, on the other hand, higher temperatures are necessary to promote bud cell growth afterward. Different process-based models have been developed in the last decades to predict the date of budbreak of woody species. They predict that global warming should delay or compromise endodormancy break at the species equatorward range limits, leading to a delay or even impossibility to flower or set new leaves. These models are classically parameterized with flowering or budbreak dates only, with no information on the endodormancy break date because this information is very scarce. Here, we evaluated the efficiency of a set of phenological models to accurately predict the endodormancy break dates of three fruit trees. Our results show that models calibrated solely with budbreak dates usually do not accurately predict the endodormancy break date. Providing the endodormancy break date for the model parameterization results in much more accurate prediction of the latter, albeit with a higher error than that on budbreak dates. Most importantly, we show that models not calibrated with endodormancy break dates can generate large discrepancies in forecasted budbreak dates when using climate scenarios as compared to models calibrated with endodormancy break dates. This discrepancy increases with mean annual temperature and is therefore the strongest after 2050 in the southernmost regions. Our results point to the urgent need for massive measurements of endodormancy break dates in forest and fruit trees to yield more robust projections of phenological changes in the near future. PMID:27272707
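
The two-phase (sequential chilling/forcing) model family discussed in these records can be sketched as follows; the unit definitions, thresholds, and requirements are illustrative assumptions, not calibrated values from the study:

```python
def predict_budbreak(daily_temps, chill_threshold=5.0,
                     chill_requirement=60.0, base_temp=5.0,
                     forcing_requirement=150.0):
    """Two-phase (sequential chilling/forcing) phenology model sketch.

    Phase 1 (endodormancy): accumulate one chill unit per day whose mean
    temperature is below `chill_threshold`, until `chill_requirement` is met.
    Phase 2 (ecodormancy): accumulate growing degree days above `base_temp`
    until `forcing_requirement` is met.

    Returns (endodormancy_break_day, budbreak_day) as indices into
    `daily_temps`, or None where a requirement is never met.
    """
    chill, forcing = 0.0, 0.0
    dormancy_break = None
    for day, t in enumerate(daily_temps):
        if dormancy_break is None:
            if t < chill_threshold:
                chill += 1.0
            if chill >= chill_requirement:
                dormancy_break = day
        else:
            forcing += max(0.0, t - base_temp)
            if forcing >= forcing_requirement:
                return dormancy_break, day
    return dormancy_break, None

# Toy winter: 90 cold days (2 C) followed by a warming spring (10 C)
temps = [2.0] * 90 + [10.0] * 60
print(predict_budbreak(temps))
```

A one-phase model would keep only the forcing loop; the study's point is that calibrating the chilling phase requires observed endodormancy break dates, which are rarely measured.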

  15. Accurate similarity index based on activity and connectivity of node for link prediction

    NASA Astrophysics Data System (ADS)

    Li, Longjie; Qian, Lvjian; Wang, Xiaoping; Luo, Shishun; Chen, Xiaoyun

    2015-05-01

Recent years have witnessed an increasing availability of network data; however, much of that data is incomplete. Link prediction, which can find the missing links of a network, plays an important role in the research and analysis of complex networks. Based on the assumption that two unconnected nodes which are highly similar are very likely to have an interaction, most of the existing algorithms solve the link prediction problem by computing nodes' similarities. The fundamental requirement of those algorithms is accurate and effective similarity indices. In this paper, we propose a new similarity index, namely similarity based on activity and connectivity (SAC), which performs link prediction more accurately. To compute the similarity between two nodes, this index employs the average activity of these two nodes in their common neighborhood and the connectivities between them and their common neighbors. The higher the average activity is and the stronger the connectivities are, the more similar the two nodes are. The proposed index not only commendably distinguishes the contributions of paths but also incorporates the influence of endpoints. Therefore, it can achieve a better predicting result. To verify the performance of SAC, we conduct experiments on 10 real-world networks. Experimental results demonstrate that SAC outperforms the compared baselines.
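
The exact SAC formula is not reproduced in the abstract, but it belongs to the common-neighborhood family of similarity indices used as baselines in this literature. The standard members of that family (common neighbors, Adamic-Adar, and resource allocation) can be sketched as:

```python
import math
from collections import defaultdict

def build_adj(edges):
    """Undirected adjacency sets from an edge list."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return adj

def common_neighbors(adj, x, y):
    return len(adj[x] & adj[y])

def adamic_adar(adj, x, y):
    # weight each common neighbor z by 1/log(deg(z)); skip degree-1 nodes
    return sum(1.0 / math.log(len(adj[z]))
               for z in adj[x] & adj[y] if len(adj[z]) > 1)

def resource_allocation(adj, x, y):
    # penalize high-degree (hub) common neighbors more strongly
    return sum(1.0 / len(adj[z]) for z in adj[x] & adj[y])

# Toy graph: two triangles sharing node 3
adj = build_adj([(1, 2), (2, 3), (1, 3), (3, 4), (4, 5), (3, 5)])
for pair in [(1, 4), (2, 5)]:
    print(pair, resource_allocation(adj, *pair))
```

In a link-prediction experiment, every non-edge is scored by such an index and the highest-scoring pairs are proposed as missing links; SAC additionally weights each score by the endpoints' activity in the common neighborhood.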

  16. Accurate prediction of the linear viscoelastic properties of highly entangled mono and bidisperse polymer melts.

    PubMed

    Stephanou, Pavlos S; Mavrantzas, Vlasis G

    2014-06-01

    We present a hierarchical computational methodology which permits the accurate prediction of the linear viscoelastic properties of entangled polymer melts directly from the chemical structure, chemical composition, and molecular architecture of the constituent chains. The method entails three steps: execution of long molecular dynamics simulations with moderately entangled polymer melts, self-consistent mapping of the accumulated trajectories onto a tube model and parameterization or fine-tuning of the model on the basis of detailed simulation data, and use of the modified tube model to predict the linear viscoelastic properties of significantly higher molecular weight (MW) melts of the same polymer. Predictions are reported for the zero-shear-rate viscosity η0 and the spectra of storage G'(ω) and loss G″(ω) moduli for several mono and bidisperse cis- and trans-1,4 polybutadiene melts as well as for their MW dependence, and are found to be in remarkable agreement with experimentally measured rheological data. PMID:24908037

  17. Accurate prediction of the linear viscoelastic properties of highly entangled mono and bidisperse polymer melts

    NASA Astrophysics Data System (ADS)

    Stephanou, Pavlos S.; Mavrantzas, Vlasis G.

    2014-06-01

    We present a hierarchical computational methodology which permits the accurate prediction of the linear viscoelastic properties of entangled polymer melts directly from the chemical structure, chemical composition, and molecular architecture of the constituent chains. The method entails three steps: execution of long molecular dynamics simulations with moderately entangled polymer melts, self-consistent mapping of the accumulated trajectories onto a tube model and parameterization or fine-tuning of the model on the basis of detailed simulation data, and use of the modified tube model to predict the linear viscoelastic properties of significantly higher molecular weight (MW) melts of the same polymer. Predictions are reported for the zero-shear-rate viscosity η0 and the spectra of storage G'(ω) and loss G″(ω) moduli for several mono and bidisperse cis- and trans-1,4 polybutadiene melts as well as for their MW dependence, and are found to be in remarkable agreement with experimentally measured rheological data.

  18. Prediction of Accurate Thermochemistry of Medium and Large Sized Radicals Using Connectivity-Based Hierarchy (CBH).

    PubMed

    Sengupta, Arkajyoti; Raghavachari, Krishnan

    2014-10-14

    Accurate modeling of the chemical reactions in many diverse areas such as combustion, photochemistry, or atmospheric chemistry strongly depends on the availability of thermochemical information of the radicals involved. However, accurate thermochemical investigations of radical systems using state of the art composite methods have mostly been restricted to the study of hydrocarbon radicals of modest size. In an alternative approach, systematic error-canceling thermochemical hierarchy of reaction schemes can be applied to yield accurate results for such systems. In this work, we have extended our connectivity-based hierarchy (CBH) method to the investigation of radical systems. We have calibrated our method using a test set of 30 medium sized radicals to evaluate their heats of formation. The CBH-rad30 test set contains radicals containing diverse functional groups as well as cyclic systems. We demonstrate that the sophisticated error-canceling isoatomic scheme (CBH-2) with modest levels of theory is adequate to provide heats of formation accurate to ∼1.5 kcal/mol. Finally, we predict heats of formation of 19 other large and medium sized radicals for which the accuracy of available heats of formation are less well-known. PMID:26588131

  19. A Novel Method for Accurate Operon Predictions in All Sequenced Prokaryotes

    SciTech Connect

    Price, Morgan N.; Huang, Katherine H.; Alm, Eric J.; Arkin, Adam P.

    2004-12-01

    We combine comparative genomic measures and the distance separating adjacent genes to predict operons in 124 completely sequenced prokaryotic genomes. Our method automatically tailors itself to each genome using sequence information alone, and thus can be applied to any prokaryote. For Escherichia coli K12 and Bacillus subtilis, our method is 85 and 83% accurate, respectively, which is similar to the accuracy of methods that use the same features but are trained on experimentally characterized transcripts. In Halobacterium NRC-1 and in Helicobacter pylori, our method correctly infers that genes in operons are separated by shorter distances than they are in E. coli, and its predictions using distance alone are more accurate than distance-only predictions trained on a database of E. coli transcripts. We use microarray data from six phylogenetically diverse prokaryotes to show that combining intergenic distance with comparative genomic measures further improves accuracy and that our method is broadly effective. Finally, we survey operon structure across 124 genomes, and find several surprises: H. pylori has many operons, contrary to previous reports; Bacillus anthracis has an unusual number of pseudogenes within conserved operons; and Synechocystis PCC6803 has many operons even though it has unusually wide spacings between conserved adjacent genes.
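
    The distance feature at the core of the method can be illustrated with a minimal same-strand gene-pair classifier. The 50 bp cutoff below is a hypothetical placeholder; the paper fits genome-specific distance models rather than a fixed threshold.

```python
# Minimal distance-only operon caller. The paper combines intergenic
# distance with comparative-genomic signals and tailors the model to
# each genome; the fixed 50 bp cutoff here is only a placeholder.

def intergenic_distance(upstream_gene_end, downstream_gene_start):
    """Gap in bp between adjacent same-strand genes (negative if they
    overlap, as is common inside operons)."""
    return downstream_gene_start - upstream_gene_end

def predict_same_operon(gap_bp, cutoff_bp=50):
    """Adjacent same-strand genes closer than the cutoff are predicted
    to share an operon."""
    return gap_bp < cutoff_bp
```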

  20. Development and Validation of a Multidisciplinary Tool for Accurate and Efficient Rotorcraft Noise Prediction (MUTE)

    NASA Technical Reports Server (NTRS)

    Liu, Yi; Anusonti-Inthra, Phuriwat; Diskin, Boris

    2011-01-01

    A physics-based, systematically coupled, multidisciplinary prediction tool (MUTE) for rotorcraft noise was developed and validated with a wide range of flight configurations and conditions. MUTE is an aggregation of multidisciplinary computational tools that accurately and efficiently model the physics of the source of rotorcraft noise, and predict the noise at far-field observer locations. It uses systematic coupling approaches among multiple disciplines including Computational Fluid Dynamics (CFD), Computational Structural Dynamics (CSD), and high fidelity acoustics. Within MUTE, advanced high-order CFD tools are used around the rotor blade to predict the transonic flow (shock wave) effects, which generate the high-speed impulsive noise. Predictions of the blade-vortex interaction noise in low speed flight are also improved by using the Particle Vortex Transport Method (PVTM), which preserves the wake flow details required for blade/wake and fuselage/wake interactions. The accuracy of the source noise prediction is further improved by utilizing a coupling approach between CFD and CSD, so that the effects of key structural dynamics, elastic blade deformations, and trim solutions are correctly represented in the analysis. The blade loading information and/or the flow field parameters around the rotor blade predicted by the CFD/CSD coupling approach are used to predict the acoustic signatures at far-field observer locations with a high-fidelity noise propagation code (WOPWOP3). The predicted results from the MUTE tool for rotor blade aerodynamic loading and far-field acoustic signatures are compared and validated with a variation of experimental data sets, such as UH60-A data, DNW test data and HART II test data.

  1. SIFTER search: a web server for accurate phylogeny-based protein function prediction

    DOE PAGES

    Sahraeian, Sayed M.; Luo, Kevin R.; Brenner, Steven E.

    2015-05-15

    We are awash in proteins discovered through high-throughput sequencing projects. As only a minuscule fraction of these have been experimentally characterized, computational methods are widely used for automated annotation. Here, we introduce a user-friendly web interface for accurate protein function prediction using the SIFTER algorithm. SIFTER is a state-of-the-art sequence-based gene molecular function prediction algorithm that uses a statistical model of function evolution to incorporate annotations throughout the phylogenetic tree. Due to the resources needed by the SIFTER algorithm, running SIFTER locally is not trivial for most users, especially for large-scale problems. The SIFTER web server thus provides access to precomputed predictions on 16 863 537 proteins from 232 403 species. Users can explore SIFTER predictions with queries for proteins, species, functions, and homologs of sequences not in the precomputed prediction set. Lastly, the SIFTER web server is accessible at http://sifter.berkeley.edu/ and the source code can be downloaded.

  2. Accurate Prediction of Severe Allergic Reactions by a Small Set of Environmental Parameters (NDVI, Temperature)

    PubMed Central

    Andrianaki, Maria; Azariadis, Kalliopi; Kampouri, Errika; Theodoropoulou, Katerina; Lavrentaki, Katerina; Kastrinakis, Stelios; Kampa, Marilena; Agouridakis, Panagiotis; Pirintsos, Stergios; Castanas, Elias

    2015-01-01

    Severe allergic reactions of unknown etiology, necessitating a hospital visit, have an important impact on the lives of affected individuals and impose a major economic burden on societies. The prediction of clinically severe allergic reactions would be of great importance, but current attempts have been limited by the lack of a well-founded applicable methodology and the wide spatiotemporal distribution of allergic reactions. The valid prediction of severe allergies (and especially those needing hospital treatment) in a region could alert health authorities and implicated individuals to take appropriate preemptive measures. In the present report we have collected visits for serious allergic reactions of unknown etiology from two major hospitals on the island of Crete, for two distinct time periods (validation and test sets). We have used the Normalized Difference Vegetation Index (NDVI), a satellite-based, freely available measurement, which is an indicator of live green vegetation at a given geographic area, and a set of meteorological data to develop a model capable of describing and predicting severe allergic reaction frequency. Our analysis has retained NDVI and temperature as accurate identifiers and predictors of increased hospital severe allergic reaction visits. Our approach may contribute towards the development of satellite-based modules for the prediction of severe allergic reactions in specific, well-defined geographical areas. It could probably also be used for the prediction of other environment-related diseases and conditions. PMID:25794106

  3. SIFTER search: a web server for accurate phylogeny-based protein function prediction.

    PubMed

    Sahraeian, Sayed M; Luo, Kevin R; Brenner, Steven E

    2015-07-01

    We are awash in proteins discovered through high-throughput sequencing projects. As only a minuscule fraction of these have been experimentally characterized, computational methods are widely used for automated annotation. Here, we introduce a user-friendly web interface for accurate protein function prediction using the SIFTER algorithm. SIFTER is a state-of-the-art sequence-based gene molecular function prediction algorithm that uses a statistical model of function evolution to incorporate annotations throughout the phylogenetic tree. Due to the resources needed by the SIFTER algorithm, running SIFTER locally is not trivial for most users, especially for large-scale problems. The SIFTER web server thus provides access to precomputed predictions on 16 863 537 proteins from 232 403 species. Users can explore SIFTER predictions with queries for proteins, species, functions, and homologs of sequences not in the precomputed prediction set. The SIFTER web server is accessible at http://sifter.berkeley.edu/ and the source code can be downloaded. PMID:25979264

  4. Microstructure-Dependent Gas Adsorption: Accurate Predictions of Methane Uptake in Nanoporous Carbons

    SciTech Connect

    Ihm, Yungok; Cooper, Valentino R; Gallego, Nidia C; Contescu, Cristian I; Morris, James R

    2014-01-01

    We demonstrate a successful, efficient framework for predicting gas adsorption properties in real materials based on first-principles calculations, with a specific comparison of experiment and theory for methane adsorption in activated carbons. These carbon materials have different pore size distributions, leading to a variety of uptake characteristics. Utilizing these distributions, we accurately predict experimental uptakes and heats of adsorption without empirical potentials or lengthy simulations. We demonstrate that materials with smaller pores have higher heats of adsorption, leading to a higher gas density in these pores. This pore-size dependence must be accounted for, in order to predict and understand the adsorption behavior. The theoretical approach combines: (1) ab initio calculations with a van der Waals density functional to determine adsorbent-adsorbate interactions, and (2) a thermodynamic method that predicts equilibrium adsorption densities by directly incorporating the calculated potential energy surface in a slit pore model. The predicted uptake at P=20 bar and T=298 K is in excellent agreement for all five activated carbon materials used. This approach uses only the pore-size distribution as an input, with no fitting parameters or empirical adsorbent-adsorbate interactions, and thus can be easily applied to other adsorbent-adsorbate combinations.

  5. SIFTER search: a web server for accurate phylogeny-based protein function prediction

    SciTech Connect

    Sahraeian, Sayed M.; Luo, Kevin R.; Brenner, Steven E.

    2015-05-15

    We are awash in proteins discovered through high-throughput sequencing projects. As only a minuscule fraction of these have been experimentally characterized, computational methods are widely used for automated annotation. Here, we introduce a user-friendly web interface for accurate protein function prediction using the SIFTER algorithm. SIFTER is a state-of-the-art sequence-based gene molecular function prediction algorithm that uses a statistical model of function evolution to incorporate annotations throughout the phylogenetic tree. Due to the resources needed by the SIFTER algorithm, running SIFTER locally is not trivial for most users, especially for large-scale problems. The SIFTER web server thus provides access to precomputed predictions on 16 863 537 proteins from 232 403 species. Users can explore SIFTER predictions with queries for proteins, species, functions, and homologs of sequences not in the precomputed prediction set. Lastly, the SIFTER web server is accessible at http://sifter.berkeley.edu/ and the source code can be downloaded.

  6. Change in heat capacity accurately predicts vibrational coupling in enzyme catalyzed reactions.

    PubMed

    Arcus, Vickery L; Pudney, Christopher R

    2015-08-01

    The temperature dependence of kinetic isotope effects (KIEs) has been used to infer the vibrational coupling of the protein and/or substrate to the reaction coordinate, particularly in enzyme-catalyzed hydrogen transfer reactions. We find that a new model for the temperature dependence of experimentally determined observed rate constants (macromolecular rate theory, MMRT) is able to accurately predict the occurrence of vibrational coupling, even where the temperature dependence of the KIE fails. This model, which incorporates the change in heat capacity for enzyme catalysis, demonstrates remarkable consistency with both experiment and theory and in many respects is more robust than models used at present.
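
    MMRT extends transition-state theory with a heat-capacity-of-activation term, giving rate constants a curved temperature dependence and a temperature optimum when ΔC‡p is negative. A direct transcription of the published rate expression (the parameter values used in the example are illustrative, not fitted enzyme data):

```python
import math

R  = 8.314         # gas constant, J mol^-1 K^-1
KB = 1.380649e-23  # Boltzmann constant, J K^-1
H  = 6.62607e-34   # Planck constant, J s

def mmrt_rate(T, dH0, dS0, dCp, T0=298.15):
    """MMRT rate constant:
    ln k = ln(kB*T/h) - [dH0 + dCp*(T - T0)]/(R*T)
                      + [dS0 + dCp*ln(T/T0)]/R
    with dH0, dS0 the activation enthalpy/entropy at reference T0 and
    dCp the change in heat capacity of activation (all J-based units).
    Parameter values passed in here are illustrative only."""
    lnk = (math.log(KB * T / H)
           - (dH0 + dCp * (T - T0)) / (R * T)
           + (dS0 + dCp * math.log(T / T0)) / R)
    return math.exp(lnk)
```

    With a negative ΔC‡p (e.g. dCp = -5 kJ mol⁻¹ K⁻¹ and dH0 = 50 kJ mol⁻¹), the rate passes through a maximum near 308 K rather than rising monotonically, which is the signature MMRT captures.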

  7. Accurate verification of the conserved-vector-current and standard-model predictions

    SciTech Connect

    Sirlin, A.; Zucchini, R.

    1986-10-20

    An approximate analytic calculation of O(Zα²) corrections to Fermi decays is presented. When the analysis of Koslowsky et al. is modified to take into account the new results, it is found that each of the eight accurately studied ℱt values differs from the average by ≲1σ, thus significantly improving the comparison of experiments with conserved-vector-current predictions. The new ℱt values are lower than before, which also brings experiments into very good agreement with the three-generation standard model, at the level of its quantum corrections.

  8. Towards more accurate wind and solar power prediction by improving NWP model physics

    NASA Astrophysics Data System (ADS)

    Steiner, Andrea; Köhler, Carmen; von Schumann, Jonas; Ritter, Bodo

    2014-05-01

    The growing importance and successive expansion of renewable energies raise new challenges for decision makers, economists, transmission system operators, scientists and many more. In this interdisciplinary field, the role of Numerical Weather Prediction (NWP) is to reduce the errors and provide an a priori estimate of remaining uncertainties associated with the large share of weather-dependent power sources. For this purpose it is essential to optimize NWP model forecasts with respect to those prognostic variables which are relevant for wind and solar power plants. An improved weather forecast serves as the basis for sophisticated power forecasts. Consequently, well-timed energy trading on the stock market and electrical grid stability can be maintained. The German Weather Service (DWD) is currently involved in two projects concerning research in the field of renewable energy, namely ORKA*) and EWeLiNE**). Whereas the latter is in collaboration with the Fraunhofer Institute (IWES), the project ORKA is led by energy & meteo systems (emsys). Both cooperate with German transmission system operators. The goal of the projects is to improve wind and photovoltaic (PV) power forecasts by combining optimized NWP and enhanced power forecast models. In this context, the German Weather Service aims to improve its model system, including the ensemble forecasting system, by working on data assimilation, model physics and statistical post-processing. This presentation is focused on the identification of critical weather situations and the associated errors in the German regional NWP model COSMO-DE. First steps leading to improved physical parameterization schemes within the NWP model are presented. Wind mast measurements reaching up to 200 m height above ground are used for the estimation of the NWP wind forecast error at heights relevant for wind energy plants. One particular problem is the daily cycle in wind speed. The transition from stable stratification during

  9. Predicting mixture phase equilibria and critical behavior using the SAFT-VRX approach.

    PubMed

    Sun, Lixin; Zhao, Honggang; Kiselev, Sergei B; McCabe, Clare

    2005-05-12

    The SAFT-VRX equation of state combines the SAFT-VR equation with a crossover function that smoothly transforms the classical equation into a nonanalytical form close to the critical point. By combining the accuracy of the SAFT-VR approach away from the critical region with the asymptotic scaling behavior seen at the critical point of real fluids, the SAFT-VRX equation can accurately describe the global fluid phase diagram. In previous work, we demonstrated that the SAFT-VRX equation very accurately describes the pvT and phase behavior of both nonassociating and associating pure fluids, with a minimum of fitting to experimental data. Here, we present a generalized SAFT-VRX equation of state for binary mixtures that is found to accurately predict the vapor-liquid equilibrium and pvT behavior of the systems studied. In particular, we examine binary mixtures of n-alkanes and carbon dioxide + n-alkanes. The SAFT-VRX equation accurately describes not only the gas-liquid critical locus for these systems but also the vapor-liquid equilibrium phase diagrams and thermal properties in single-phase regions.

  10. Intermolecular potentials and the accurate prediction of the thermodynamic properties of water

    SciTech Connect

    Shvab, I.; Sadus, Richard J.

    2013-11-21

    The ability of intermolecular potentials to correctly predict the thermodynamic properties of liquid water at a density of 0.998 g/cm³ for a wide range of temperatures (298–650 K) and pressures (0.1–700 MPa) is investigated. Molecular dynamics simulations are reported for the pressure, thermal pressure coefficient, thermal expansion coefficient, isothermal and adiabatic compressibilities, isobaric and isochoric heat capacities, and Joule-Thomson coefficient of liquid water using the non-polarizable SPC/E and TIP4P/2005 potentials. The results are compared with both experiment data and results obtained from the ab initio-based Matsuoka-Clementi-Yoshimine non-additive (MCYna) [J. Li, Z. Zhou, and R. J. Sadus, J. Chem. Phys. 127, 154509 (2007)] potential, which includes polarization contributions. The data clearly indicate that both the SPC/E and TIP4P/2005 potentials are only in qualitative agreement with experiment, whereas the polarizable MCYna potential predicts some properties within experimental uncertainty. This highlights the importance of polarizability for the accurate prediction of the thermodynamic properties of water, particularly at temperatures beyond 298 K.

  11. Accurate Histological Techniques to Evaluate Critical Temperature Thresholds for Prostate In Vivo

    NASA Astrophysics Data System (ADS)

    Bronskill, Michael; Chopra, Rajiv; Boyes, Aaron; Tang, Kee; Sugar, Linda

    2007-05-01

    Various histological techniques have been compared to evaluate the boundaries of thermal damage produced by ultrasound in vivo in a canine model. When all images are accurately co-registered, H&E stained micrographs provide the best assessment of acute cellular damage. Estimates of the boundaries of 100% and 0% cell killing correspond to maximum temperature thresholds of 54.6 ± 1.7°C and 51.5 ± 1.9°C, respectively.

  12. Direct Pressure Monitoring Accurately Predicts Pulmonary Vein Occlusion During Cryoballoon Ablation

    PubMed Central

    Kosmidou, Ioanna; Wooden, Shannnon; Jones, Brian; Deering, Thomas; Wickliffe, Andrew; Dan, Dan

    2013-01-01

    Cryoballoon ablation (CBA) is an established therapy for atrial fibrillation (AF). Pulmonary vein (PV) occlusion is essential for achieving antral contact and PV isolation and is typically assessed by contrast injection. We present a novel method of direct pressure monitoring for assessment of PV occlusion. Transcatheter pressure is monitored during balloon advancement to the PV antrum. Pressure is recorded via a single pressure transducer connected to the inner lumen of the cryoballoon. Pressure curve characteristics are used to assess occlusion in conjunction with fluoroscopic or intracardiac echocardiography (ICE) guidance. PV occlusion is confirmed when loss of typical left atrial (LA) pressure waveform is observed with recordings of PA pressure characteristics (no A wave and rapid V wave upstroke). Complete pulmonary vein occlusion as assessed with this technique has been confirmed with concurrent contrast utilization during the initial testing of the technique and has been shown to be highly accurate and readily reproducible. We evaluated the efficacy of this novel technique in 35 patients. A total of 128 veins were assessed for occlusion with the cryoballoon utilizing the pressure monitoring technique; occlusive pressure was demonstrated in 113 veins with resultant successful pulmonary vein isolation in 111 veins (98.2%). Occlusion was confirmed with subsequent contrast injection during the initial ten procedures, after which contrast utilization was rapidly reduced or eliminated given the highly accurate identification of occlusive pressure waveform with limited initial training. Verification of PV occlusive pressure during CBA is a novel approach to assessing effective PV occlusion and it accurately predicts electrical isolation. Utilization of this method results in significant decrease in fluoroscopy time and volume of contrast. PMID:23485956

  13. A fast and accurate method to predict 2D and 3D aerodynamic boundary layer flows

    NASA Astrophysics Data System (ADS)

    Bijleveld, H. A.; Veldman, A. E. P.

    2014-12-01

    A quasi-simultaneous interaction method is applied to predict 2D and 3D aerodynamic flows. This method is suitable for offshore wind turbine design software as it is a very accurate and computationally reasonably cheap method. This study shows the results for a NACA 0012 airfoil. The two applied solvers converge to the experimental values when the grid is refined. We also show that in separation the eigenvalues remain positive thus avoiding the Goldstein singularity at separation. In 3D we show a flow over a dent in which separation occurs. A rotating flat plate is used to show the applicability of the method for rotating flows. The shown capabilities of the method indicate that the quasi-simultaneous interaction method is suitable for design methods for offshore wind turbine blades.

  14. Distance scaling method for accurate prediction of slowly varying magnetic fields in satellite missions

    NASA Astrophysics Data System (ADS)

    Zacharias, Panagiotis P.; Chatzineofytou, Elpida G.; Spantideas, Sotirios T.; Capsalis, Christos N.

    2016-07-01

    In the present work, the determination of the magnetic behavior of localized magnetic sources from near-field measurements is examined. The distance power law of the magnetic field fall-off is used in various cases to accurately predict the magnetic signature of an equipment under test (EUT) consisting of multiple alternating current (AC) magnetic sources. To this end, parameters concerning the location of the observation points (magnetometers) are studied. The results clearly show that these parameters are independent of the EUT's size and layout. Additionally, the techniques developed in the present study enable the placing of the magnetometers close to the EUT, thus achieving a high signal-to-noise ratio (SNR). Finally, the proposed method is verified by real measurements, using a mobile phone as an EUT.
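
    The distance power law the method exploits can be sketched as a two-point fit: assume the field of a localized source falls off as B(r) = k/r^n, solve for n and k from two magnetometer readings at known distances, then extrapolate to a far observation point. The readings in the example are synthetic; a pure dipole recovers n = 3.

```python
import math

def fit_power_law(r1, b1, r2, b2):
    """Fit B(r) = k / r**n to two field readings (b1 at r1, b2 at r2)."""
    n = math.log(b1 / b2) / math.log(r2 / r1)
    k = b1 * r1 ** n
    return n, k

def extrapolate(r, n, k):
    """Predicted field magnitude at distance r for a fitted source."""
    return k / r ** n
```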

  15. In vitro transcription accurately predicts lac repressor phenotype in vivo in Escherichia coli

    PubMed Central

    2014-01-01

    A multitude of studies have looked at the in vivo and in vitro behavior of the lac repressor binding to DNA and effector molecules in order to study transcriptional repression; however, these studies are not always reconcilable. Here we use in vitro transcription to directly mimic the in vivo system in order to build a self-consistent set of experiments to directly compare in vivo and in vitro genetic repression. A thermodynamic model of the lac repressor binding to operator DNA and effector is used to link DNA occupancy to either normalized in vitro mRNA product or normalized in vivo fluorescence of a regulated gene, YFP. Accurate measurements of repressor, DNA, and effector concentrations were made both in vivo and in vitro, allowing for direct modeling of the entire thermodynamic equilibrium. In vivo repression profiles are accurately predicted from the given in vitro parameters when molecular crowding is considered. Interestingly, our measured repressor–operator DNA affinity differs significantly from previous in vitro measurements. The literature values are unable to replicate in vivo binding data. We therefore conclude that the repressor–DNA affinity is much weaker than previously thought. This finding suggests that in vitro techniques specifically designed to mimic the in vivo process may be necessary to replicate the native system. PMID:25097824
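
    The core of such a thermodynamic model is an equilibrium occupancy expression linking repressor concentration and binding affinity to expression of the regulated gene. A minimal two-state sketch (the Kd and concentrations used in the example are illustrative, not the paper's measured lac parameters, and crowding corrections are omitted):

```python
# Minimal equilibrium-occupancy model: expression of the regulated
# gene tracks the fraction of *unbound* operator DNA. Illustrative
# only -- no effector branch or molecular-crowding correction.

def operator_occupancy(repressor_conc, kd):
    """Fraction of operator bound by repressor at equilibrium
    (simple two-state binding, repressor in excess over operator)."""
    return repressor_conc / (repressor_conc + kd)

def relative_expression(repressor_conc, kd):
    """Normalized expression = fraction of operator left unbound."""
    return 1.0 - operator_occupancy(repressor_conc, kd)
```

    At repressor concentration equal to Kd the operator is half occupied; a weaker (larger) Kd, as the paper argues for, shifts the whole repression curve toward higher repressor concentrations.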

  16. Measuring solar reflectance Part I: Defining a metric that accurately predicts solar heat gain

    SciTech Connect

    Levinson, Ronnen; Akbari, Hashem; Berdahl, Paul

    2010-05-14

    Solar reflectance can vary with the spectral and angular distributions of incident sunlight, which in turn depend on surface orientation, solar position and atmospheric conditions. A widely used solar reflectance metric based on the ASTM Standard E891 beam-normal solar spectral irradiance underestimates the solar heat gain of a spectrally selective 'cool colored' surface because this irradiance contains a greater fraction of near-infrared light than typically found in ordinary (unconcentrated) global sunlight. At mainland U.S. latitudes, this metric R_E891BN can underestimate the annual peak solar heat gain of a typical roof or pavement (slope ≤ 5:12 [23°]) by as much as 89 W m⁻², and underestimate its peak surface temperature by up to 5 K. Using R_E891BN to characterize roofs in a building energy simulation can exaggerate the economic value N of annual cool-roof net energy savings by as much as 23%. We define clear-sky air mass one global horizontal ('AM1GH') solar reflectance R_g,0, a simple and easily measured property that more accurately predicts solar heat gain. R_g,0 predicts the annual peak solar heat gain of a roof or pavement to within 2 W m⁻², and overestimates N by no more than 3%. R_g,0 is well suited to rating the solar reflectances of roofs, pavements and walls. We show in Part II that R_g,0 can be easily and accurately measured with a pyranometer, a solar spectrophotometer or version 6 of the Solar Spectrum Reflectometer.
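
    Why the choice of reference spectrum matters can be seen in a two-band toy calculation: solar reflectance is an irradiance-weighted average of spectral reflectance, so a spectrally selective surface (low visible, high near-infrared reflectance) scores higher under an NIR-rich weighting than under a weighting closer to global sunlight. The band weights below are made up for illustration, not the E891 or AM1GH tabulations.

```python
# Solar reflectance as an irradiance-weighted average:
# R = sum(r_i * I_i) / sum(I_i). Two coarse bands (visible, NIR)
# stand in for full spectra; the weights are illustrative only.

def weighted_reflectance(reflectance, irradiance):
    total = sum(irradiance)
    return sum(r * i for r, i in zip(reflectance, irradiance)) / total
```

    For a cool-colored surface with reflectance 0.2 (visible) and 0.8 (NIR), an NIR-rich weighting of [0.4, 0.6] gives R = 0.56, while a less NIR-rich [0.55, 0.45] weighting gives R = 0.47; a metric built on the NIR-rich spectrum thus reports the surface as more reflective than it is under ordinary sunlight, i.e. it underestimates solar heat gain.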

  17. Measuring solar reflectance - Part I: Defining a metric that accurately predicts solar heat gain

    SciTech Connect

    Levinson, Ronnen; Akbari, Hashem; Berdahl, Paul

    2010-09-15

    Solar reflectance can vary with the spectral and angular distributions of incident sunlight, which in turn depend on surface orientation, solar position and atmospheric conditions. A widely used solar reflectance metric based on the ASTM Standard E891 beam-normal solar spectral irradiance underestimates the solar heat gain of a spectrally selective 'cool colored' surface because this irradiance contains a greater fraction of near-infrared light than typically found in ordinary (unconcentrated) global sunlight. At mainland US latitudes, this metric R_E891BN can underestimate the annual peak solar heat gain of a typical roof or pavement (slope ≤ 5:12 [23°]) by as much as 89 W m⁻², and underestimate its peak surface temperature by up to 5 K. Using R_E891BN to characterize roofs in a building energy simulation can exaggerate the economic value N of annual cool roof net energy savings by as much as 23%. We define clear sky air mass one global horizontal ('AM1GH') solar reflectance R_g,0, a simple and easily measured property that more accurately predicts solar heat gain. R_g,0 predicts the annual peak solar heat gain of a roof or pavement to within 2 W m⁻², and overestimates N by no more than 3%. R_g,0 is well suited to rating the solar reflectances of roofs, pavements and walls. We show in Part II that R_g,0 can be easily and accurately measured with a pyranometer, a solar spectrophotometer or version 6 of the Solar Spectrum Reflectometer.

  18. Accurate prediction of solvent accessibility using neural networks-based regression.

    PubMed

    Adamczak, Rafał; Porollo, Aleksey; Meller, Jarosław

    2004-09-01

    Accurate prediction of relative solvent accessibilities (RSAs) of amino acid residues in proteins may be used to facilitate protein structure prediction and functional annotation. Toward that goal we developed a novel method for improved prediction of RSAs. Contrary to other machine learning-based methods from the literature, we do not impose a classification problem with arbitrary boundaries between the classes. Instead, we seek a continuous approximation of the real-value RSA using nonlinear regression, with several feed-forward and recurrent neural networks, which are then combined into a consensus predictor. A set of 860 protein structures derived from the PFAM database was used for training, whereas validation of the results was carefully performed on several nonredundant control sets comprising a total of 603 structures derived from new Protein Data Bank structures that had no homology to proteins included in the training. Two classes of alternative predictors were developed for comparison with the regression-based approach: one based on the standard classification approach and the other based on a semicontinuous approximation with the so-called thermometer encoding. Furthermore, a weighted approximation, with errors being scaled by the observed levels of variability in RSA for equivalent residues in families of homologous structures, was applied in order to improve the results. The effects of including evolutionary profiles and the growth of sequence databases were assessed. In accord with the observed levels of variability in RSA for different ranges of RSA values, the regression accuracy is higher for buried than for exposed residues, with overall 15.3-15.8% mean absolute errors and correlation coefficients between the predicted and experimental values of 0.64-0.67 on different control sets. The new method outperforms classification-based algorithms when the real value predictions are projected onto two-class classification problems with several commonly
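
    The consensus step described above amounts to pooling several regressors' real-value outputs into one clamped prediction. A trivial sketch of that pooling (the inputs here are stand-in numbers, not outputs of the paper's trained networks):

```python
# Consensus real-value RSA prediction: average the individual
# regressors' outputs and clamp to the physically meaningful
# range [0, 100] percent. Illustrative pooling only.

def consensus_rsa(predictions):
    avg = sum(predictions) / len(predictions)
    return max(0.0, min(100.0, avg))
```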

  19. TIMP2•IGFBP7 biomarker panel accurately predicts acute kidney injury in high-risk surgical patients

    PubMed Central

    Gunnerson, Kyle J.; Shaw, Andrew D.; Chawla, Lakhmir S.; Bihorac, Azra; Al-Khafaji, Ali; Kashani, Kianoush; Lissauer, Matthew; Shi, Jing; Walker, Michael G.; Kellum, John A.

    2016-01-01

    BACKGROUND Acute kidney injury (AKI) is an important complication in surgical patients. Existing biomarkers and clinical prediction models underestimate the risk for developing AKI. We recently reported data from two trials of 728 and 408 critically ill adult patients in whom urinary TIMP2•IGFBP7 (NephroCheck, Astute Medical) was used to identify patients at risk of developing AKI. Here we report a preplanned analysis of surgical patients from both trials to assess whether urinary tissue inhibitor of metalloproteinase 2 (TIMP-2) and insulin-like growth factor–binding protein 7 (IGFBP7) accurately identify surgical patients at risk of developing AKI. STUDY DESIGN We enrolled adult surgical patients at risk for AKI who were admitted to one of 39 intensive care units across Europe and North America. The primary end point was moderate-severe AKI (equivalent to KDIGO [Kidney Disease Improving Global Outcomes] stages 2–3) within 12 hours of enrollment. Biomarker performance was assessed using the area under the receiver operating characteristic curve, integrated discrimination improvement, and category-free net reclassification improvement. RESULTS A total of 375 patients were included in the final analysis of whom 35 (9%) developed moderate-severe AKI within 12 hours. The area under the receiver operating characteristic curve for [TIMP-2]•[IGFBP7] alone was 0.84 (95% confidence interval, 0.76–0.90; p < 0.0001). Biomarker performance was robust in sensitivity analysis across predefined subgroups (urgency and type of surgery). CONCLUSION For postoperative surgical intensive care unit patients, a single urinary TIMP2•IGFBP7 test accurately identified patients at risk for developing AKI within the ensuing 12 hours and its inclusion in clinical risk prediction models significantly enhances their performance. LEVEL OF EVIDENCE Prognostic study, level I. PMID:26816218

  20. A Simple and Accurate Model to Predict Responses to Multi-electrode Stimulation in the Retina.

    PubMed

    Maturana, Matias I; Apollo, Nicholas V; Hadjinicolaou, Alex E; Garrett, David J; Cloherty, Shaun L; Kameneva, Tatiana; Grayden, David B; Ibbotson, Michael R; Meffin, Hamish

    2016-04-01

    Implantable electrode arrays are widely used in therapeutic stimulation of the nervous system (e.g. cochlear, retinal, and cortical implants). Currently, most neural prostheses use serial stimulation (i.e. one electrode at a time) despite this severely limiting the repertoire of stimuli that can be applied. Methods to reliably predict the outcome of multi-electrode stimulation have not been available. Here, we demonstrate that a linear-nonlinear model accurately predicts neural responses to arbitrary patterns of stimulation using in vitro recordings from single retinal ganglion cells (RGCs) stimulated with a subretinal multi-electrode array. In the model, the stimulus is projected onto a low-dimensional subspace and then undergoes a nonlinear transformation to produce an estimate of spiking probability. The low-dimensional subspace is estimated using principal components analysis, which gives the neuron's electrical receptive field (ERF), i.e. the electrodes to which the neuron is most sensitive. Our model suggests that stimulation proportional to the ERF yields a higher efficacy given a fixed amount of power when compared to equal amplitude stimulation on up to three electrodes. We find that the model captures the responses of all the cells recorded in the study, suggesting that it will generalize to most cell types in the retina. The model is computationally efficient to evaluate and, therefore, appropriate for future real-time applications including stimulation strategies that make use of recorded neural activity to improve the stimulation strategy. PMID:27035143
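    The linear-nonlinear pipeline described in this abstract can be sketched on synthetic data. Everything below is a toy simulation under assumed numbers (20 electrodes, a two-electrode ERF, a sigmoidal ground truth), not the authors' recordings or fitting code: PCA on the spike-triggered ensemble estimates the electrical receptive field, and a histogram of spike rates along that axis estimates the nonlinearity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2000 stimulation patterns on 20 electrodes (amplitudes).
stimuli = rng.normal(size=(2000, 20))

# Hidden "true" electrical receptive field (ERF) of a simulated cell.
true_erf = np.zeros(20)
true_erf[3], true_erf[4] = 2.0, 1.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

spikes = rng.random(2000) < sigmoid(stimuli @ true_erf - 2.0)

# 1) Linear stage: the leading principal component of the spike-triggered
#    ensemble's second moment recovers the ERF direction.
S = stimuli[spikes]
eigvals, eigvecs = np.linalg.eigh(S.T @ S / len(S))
erf_hat = eigvecs[:, -1]
if erf_hat @ true_erf < 0:          # eigenvector sign is arbitrary
    erf_hat = -erf_hat

# 2) Nonlinear stage: histogram estimate of P(spike | projection onto ERF).
proj = stimuli @ erf_hat
edges = np.quantile(proj, np.linspace(0.0, 1.0, 9))
bin_of = lambda x: np.clip(np.searchsorted(edges, x) - 1, 0, 7)
p_of_bin = np.array([spikes[bin_of(proj) == b].mean() for b in range(8)])

def predict(stimulus):
    """Estimated spiking probability for an arbitrary stimulation pattern."""
    return p_of_bin[bin_of(stimulus @ erf_hat)]

cosine = erf_hat @ true_erf / np.linalg.norm(true_erf)
print(f"ERF recovered with cosine similarity {cosine:.2f}")
```

    The paper's model additionally handles the details of electrical stimulation artifacts and multi-dimensional subspaces; this sketch only shows the subspace-then-nonlinearity structure.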

  1. Accurate load prediction by BEM with airfoil data from 3D RANS simulations

    NASA Astrophysics Data System (ADS)

    Schneider, Marc S.; Nitzsche, Jens; Hennings, Holger

    2016-09-01

    In this paper, two methods for the extraction of airfoil coefficients from 3D CFD simulations of a wind turbine rotor are investigated, and these coefficients are used to improve the load prediction of a BEM code. The coefficients are extracted from a number of steady RANS simulations, using either averaging of velocities in annular sections, or an inverse BEM approach for determination of the induction factors in the rotor plane. It is shown that these 3D rotor polars are able to capture the rotational augmentation at the inner part of the blade as well as the load reduction by 3D effects close to the blade tip. They are used as input to a simple BEM code and the results of this BEM with 3D rotor polars are compared to the predictions of BEM with 2D airfoil coefficients plus common empirical corrections for stall delay and tip loss. While BEM with 2D airfoil coefficients produces a very different radial distribution of loads than the RANS simulation, the BEM with 3D rotor polars manages to reproduce the loads from RANS very accurately for a variety of load cases, as long as the blade pitch angle is not too different from the cases from which the polars were extracted.
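    The momentum/blade-element balance that such a BEM code iterates can be sketched for a single radial station. The polar below is a linear placeholder standing in for tabulated 3D rotor polars, and all geometry numbers are invented:

```python
import math

def bem_station(V=8.0, omega=2.0, r=40.0, chord=2.0, B=3,
                twist_pitch=math.radians(2.0), tol=1e-8, relax=0.5):
    """Fixed-point BEM iteration for axial (a) and tangential (ap) induction
    at one radial station. Cl = 2*pi*alpha, Cd = 0.01 is a placeholder polar;
    a real code would interpolate the extracted 3D rotor polars here."""
    sigma = B * chord / (2.0 * math.pi * r)                  # local solidity
    a, ap = 0.3, 0.0
    for _ in range(500):
        phi = math.atan2((1.0 - a) * V, (1.0 + ap) * omega * r)  # inflow angle
        alpha = phi - twist_pitch                                # angle of attack
        cl, cd = 2.0 * math.pi * alpha, 0.01                     # placeholder polar
        cn = cl * math.cos(phi) + cd * math.sin(phi)             # normal coefficient
        ct = cl * math.sin(phi) - cd * math.cos(phi)             # tangential coefficient
        a_new = 1.0 / (4.0 * math.sin(phi) ** 2 / (sigma * cn) + 1.0)
        ap_new = 1.0 / (4.0 * math.sin(phi) * math.cos(phi) / (sigma * ct) - 1.0)
        a = (1.0 - relax) * a + relax * a_new
        ap = (1.0 - relax) * ap + relax * ap_new
        if abs(a - a_new) < tol and abs(ap - ap_new) < tol:
            break
    return a, ap

a, ap = bem_station()
print(f"a = {a:.3f}, a' = {ap:.5f}")
```

    When the polars already contain the 3D effects, as in the paper, the usual empirical stall-delay and tip-loss corrections applied around this loop become unnecessary.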

  3. ChIP-seq Accurately Predicts Tissue-Specific Activity of Enhancers

    SciTech Connect

    Visel, Axel; Blow, Matthew J.; Li, Zirong; Zhang, Tao; Akiyama, Jennifer A.; Holt, Amy; Plajzer-Frick, Ingrid; Shoukry, Malak; Wright, Crystal; Chen, Feng; Afzal, Veena; Ren, Bing; Rubin, Edward M.; Pennacchio, Len A.

    2009-02-01

    A major yet unresolved quest in decoding the human genome is the identification of the regulatory sequences that control the spatial and temporal expression of genes. Distant-acting transcriptional enhancers are particularly challenging to uncover since they are scattered amongst the vast non-coding portion of the genome. Evolutionary sequence constraint can facilitate the discovery of enhancers, but fails to predict when and where they are active in vivo. Here, we performed chromatin immunoprecipitation with the enhancer-associated protein p300, followed by massively-parallel sequencing, to map several thousand in vivo binding sites of p300 in mouse embryonic forebrain, midbrain, and limb tissue. We tested 86 of these sequences in a transgenic mouse assay, which in nearly all cases revealed reproducible enhancer activity in those tissues predicted by p300 binding. Our results indicate that in vivo mapping of p300 binding is a highly accurate means for identifying enhancers and their associated activities and suggest that such datasets will be useful to study the role of tissue-specific enhancers in human biology and disease on a genome-wide scale.

  4. Accurate First-Principles Spectra Predictions for Planetological and Astrophysical Applications at Various T-Conditions

    NASA Astrophysics Data System (ADS)

    Rey, M.; Nikitin, A. V.; Tyuterev, V.

    2014-06-01

    Knowledge of near infrared intensities of rovibrational transitions of polyatomic molecules is essential for the modeling of various planetary atmospheres, brown dwarfs, and other astrophysical applications [1-3]. For example, to analyze exoplanets, atmospheric models have been developed, creating the need for accurate spectroscopic data. Consequently, the spectral characterization of such planetary objects relies on the necessity of having adequate and reliable molecular data in extreme conditions (temperature, optical path length, pressure). On the other hand, in the modeling of astrophysical opacities, millions of lines are generally involved and the line-by-line extraction is clearly not feasible in laboratory measurements. It is thus suggested that this large amount of data could be interpreted only by reliable theoretical predictions. There exist essentially two theoretical approaches for the computation and prediction of spectra. The first one is based on empirically-fitted effective spectroscopic models. Another way for computing energies, line positions and intensities is based on global variational calculations using ab initio surfaces. They do not yet reach the spectroscopic accuracy stricto sensu but implicitly account for all intramolecular interactions including resonance couplings in a wide spectral range. The final aim of this work is to provide reliable predictions which could be quantitatively accurate with respect to the precision of available observations and as complete as possible. All this thus requires extensive first-principles quantum mechanical calculations essentially based on three necessary ingredients which are (i) accurate intramolecular potential energy surface and dipole moment surface components well-defined in a large range of vibrational displacements and (ii) efficient computational methods combined with suitable choices of coordinates to account for molecular symmetry properties and to achieve a good numerical

  5. Development of a New Model for Accurate Prediction of Cloud Water Deposition on Vegetation

    NASA Astrophysics Data System (ADS)

    Katata, G.; Nagai, H.; Wrzesinsky, T.; Klemm, O.; Eugster, W.; Burkard, R.

    2006-12-01

    Scarcity of water resources in arid and semi-arid areas is of great concern in the light of population growth and food shortages. Several experiments focusing on cloud (fog) water deposition on the land surface suggest that cloud water plays an important role in the water resources of such regions. A one-dimensional vegetation model including the process of cloud water deposition on vegetation has been developed to better predict cloud water deposition. New schemes to calculate the capture efficiency of leaves, the cloud droplet size distribution, and the gravitational flux of cloud water were incorporated in the model. Model calculations were compared with the data acquired at the Norway spruce forest at the Waldstein site, Germany. High performance of the model was confirmed by comparisons of calculated net radiation, sensible and latent heat, and cloud water fluxes over the forest with measurements. The present model provided a better prediction of measured turbulent and gravitational fluxes of cloud water over the canopy than the Lovett model, which is a commonly used cloud water deposition model. Detailed calculations of evapotranspiration and of the turbulent exchange of heat and water vapor within the canopy, together with the corresponding model modifications, are necessary for accurate prediction of cloud water deposition. Numerical experiments to examine the dependence of cloud water deposition on vegetation species (coniferous and broad-leaved trees, flat and cylindrical grasses) and structures (Leaf Area Index (LAI) and canopy height) were performed using the presented model. The results indicate that differences in leaf shape and size have a large impact on cloud water deposition. Cloud water deposition also varies with the growth of vegetation and the seasonal change of LAI. We found that coniferous trees whose height and LAI are 24 m and 2.0 m2 m-2, respectively, produce the largest amount of cloud water deposition in all combinations of vegetation species and structures in the

  6. nuMap: a web platform for accurate prediction of nucleosome positioning.

    PubMed

    Alharbi, Bader A; Alshammari, Thamir H; Felton, Nathan L; Zhurkin, Victor B; Cui, Feng

    2014-10-01

    Nucleosome positioning is critical for gene expression and of major biological interest. The high cost of experimentally mapping nucleosomal arrangement signifies the need for computational approaches to predict nucleosome positions at high resolution. Here, we present a web-based application to fulfill this need by implementing two models, YR and W/S schemes, for the translational and rotational positioning of nucleosomes, respectively. Our methods are based on sequence-dependent anisotropic bending that dictates how DNA is wrapped around a histone octamer. This application allows users to specify a number of options such as schemes and parameters for the threading calculation and provides multiple layout formats. nuMap is implemented in Java/Perl/MySQL and is freely available for public use at http://numap.rit.edu. The user manual, implementation notes, description of the methodology and examples are available at the site. PMID:25220945

  7. A Foundation for the Accurate Prediction of the Soft Error Vulnerability of Scientific Applications

    SciTech Connect

    Bronevetsky, G; de Supinski, B; Schulz, M

    2009-02-13

    Understanding the soft error vulnerability of supercomputer applications is critical as these systems are using ever larger numbers of devices that have decreasing feature sizes and, thus, increasing frequency of soft errors. As many large scale parallel scientific applications use BLAS and LAPACK linear algebra routines, the soft error vulnerability of these methods constitutes a large fraction of the applications overall vulnerability. This paper analyzes the vulnerability of these routines to soft errors by characterizing how their outputs are affected by injected errors and by evaluating several techniques for predicting how errors propagate from the input to the output of each routine. The resulting error profiles can be used to understand the fault vulnerability of full applications that use these routines.
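    The fault-injection idea, characterizing how an injected error propagates from the input to the output of a linear algebra routine, can be sketched as follows. The routine (a plain matrix multiply), the sizes, and the error metric are illustrative choices, not the paper's setup:

```python
import struct
import numpy as np

def flip_bit(x, bit):
    """Flip one bit of a float64: a crude model of a single soft error."""
    (i,) = struct.unpack("<Q", struct.pack("<d", x))
    (y,) = struct.unpack("<d", struct.pack("<Q", i ^ (1 << bit)))
    return y

rng = np.random.default_rng(42)
A = rng.standard_normal((64, 64))
B = rng.standard_normal((64, 64))
clean = A @ B

# Inject a single bit flip into one element of A and profile how the error
# appears in the output of the multiply.
errors = []
for bit in range(0, 63, 8):            # sample bit positions across the word
    A_faulty = A.copy()
    A_faulty[10, 20] = flip_bit(A[10, 20], bit)
    diff = np.abs(A_faulty @ B - clean)
    errors.append((bit, int(np.count_nonzero(diff > 1e-12)), diff.max()))

for bit, n_affected, max_err in errors:
    print(f"bit {bit:2d}: {n_affected:3d} outputs affected, max |error| = {max_err:.3g}")
```

    For a matrix multiply the corruption stays confined to one output row, while flips in exponent bits produce errors many orders of magnitude larger than flips in low mantissa bits; profiling routines this way is the basis for the error profiles the paper builds.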

  8. Automated methods for accurate determination of the critical velocity of packed bed chromatography.

    PubMed

    Chang, Yu-Chih; Gerontas, Spyridon; Titchener-Hooker, Nigel J

    2012-01-01

    Knowing the critical velocity (ucrit) of a chromatography column is an important part of process development as it allows the optimization of chromatographic flow conditions. The conventional flow step method for determining ucrit is prone to error as it depends heavily on human judgment. In this study, two automated methods for determining ucrit have been developed: the automatic flow step (AFS) method and the automatic pressure step (APS) method. In the AFS method, the column pressure drop is monitored upon application of automated incremental increases in flow velocity, whereas in the APS method the flow velocity is monitored upon application of automated incremental increases in pressure drop. The APS method emerged as the one with the higher levels of accuracy, efficiency and ease of application having the greater potential to assist defining the best operational parameters of a chromatography column.
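    The principle behind the automated step methods, detecting where the packed bed leaves the linear (Darcy) pressure-flow regime, can be sketched on synthetic data. The numbers and the 5% deviation criterion below are assumptions for illustration, not the published procedure:

```python
import numpy as np

# Synthetic pressure-step data: velocity rises linearly with pressure drop
# (Darcy regime) and then flattens as the bed compresses. Units and values
# are invented.
pressures = np.linspace(0.1, 3.0, 30)                      # bar
k = 120.0                                                  # cm/h per bar
velocities = np.where(pressures < 2.0,
                      k * pressures,
                      k * 2.0 + 15.0 * (pressures - 2.0))  # compressed bed

def critical_velocity(p, u, n_fit=5, tol=0.05):
    """Estimate u_crit as the last velocity still within `tol` of a line
    fitted through the first `n_fit` (low-pressure) points."""
    slope = np.polyfit(p[:n_fit], u[:n_fit], 1)[0]
    deviation = 1.0 - u / (slope * p)
    below = np.nonzero(deviation > tol)[0]
    return u[below[0] - 1] if len(below) else u[-1]

ucrit = critical_velocity(pressures, velocities)
print(f"estimated u_crit = {ucrit:.1f} cm/h")
```

    Automating the stepping and this detection removes the human judgment that makes the conventional flow step method error-prone.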

  9. Diagnostic methodology is critical for accurately determining the prevalence of ichthyophonus infections in wild fish populations

    USGS Publications Warehouse

    Kocan, R.; Dolan, H.; Hershberger, P.

    2011-01-01

    Several different techniques have been employed to detect and identify Ichthyophonus spp. in infected fish hosts; these include macroscopic observation, microscopic examination of tissue squashes, histological evaluation, in vitro culture, and molecular techniques. Examination of the peer-reviewed literature revealed that when more than 1 diagnostic method is used, they often result in significantly different results; for example, when in vitro culture was used to identify infected trout in an experimentally exposed population, 98.7% of infected trout were detected, but when standard histology was used to confirm known infected tissues from wild salmon, it detected ~50% of low-intensity infections and ~85% of high-intensity infections. Other studies on different species reported similar differences. When we examined a possible mechanism to explain the disparity between different diagnostic techniques, we observed non-random distribution of the parasite in 3-dimensionally visualized tissue sections from infected hosts, thus providing a possible explanation for the different sensitivities of commonly used diagnostic techniques. Based on experimental evidence and a review of the peer-reviewed literature, we have concluded that in vitro culture is currently the most accurate diagnostic technique for determining infection prevalence of Ichthyophonus, particularly when the exposure history of the population is not known.

  10. Predicting accurate fluorescent spectra for high molecular weight polycyclic aromatic hydrocarbons using density functional theory

    NASA Astrophysics Data System (ADS)

    Powell, Jacob; Heider, Emily C.; Campiglia, Andres; Harper, James K.

    2016-10-01

    The ability of density functional theory (DFT) methods to predict accurate fluorescence spectra for polycyclic aromatic hydrocarbons (PAHs) is explored. Two methods, PBE0 and CAM-B3LYP, are evaluated both in the gas phase and in solution. Spectra for several of the most toxic PAHs are predicted and compared to experiment, including three isomers of C24H14 and a PAH containing heteroatoms. Unusually high-resolution experimental spectra are obtained for comparison by analyzing each PAH at 4.2 K in an n-alkane matrix. All theoretical spectra visually conform to the profiles of the experimental data but are systematically offset by a small amount. Specifically, when solvent is included the PBE0 functional overestimates peaks by 16.1 ± 6.6 nm while CAM-B3LYP underestimates the same transitions by 14.5 ± 7.6 nm. These calculated spectra can be empirically corrected to decrease the uncertainties to 6.5 ± 5.1 and 5.7 ± 5.1 nm for the PBE0 and CAM-B3LYP methods, respectively. A comparison of computed spectra in the gas phase indicates that the inclusion of n-octane shifts peaks by +11 nm on average and this change is roughly equivalent for PBE0 and CAM-B3LYP. An automated approach for comparing spectra is also described that minimizes residuals between a given theoretical spectrum and all available experimental spectra. This approach identifies the correct spectrum in all cases and excludes approximately 80% of the incorrect spectra, demonstrating that an automated search of theoretical libraries of spectra may eventually become feasible.
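    The empirical offset correction and the residual-minimizing spectrum matching can be sketched as follows; the peak positions are invented for illustration, not the measured PAH data:

```python
import numpy as np

# Hypothetical 0-0 and vibronic peak positions (nm) for one PAH.
experimental = np.array([385.0, 402.5, 447.0, 468.2])
predicted = np.array([400.8, 419.0, 462.5, 484.0])   # systematically red-shifted

offset = np.mean(predicted - experimental)           # systematic offset
corrected = predicted - offset                       # empirical correction

rms = lambda x: np.sqrt(np.mean(x ** 2))
print(f"offset {offset:.1f} nm; RMS error "
      f"{rms(predicted - experimental):.1f} -> {rms(corrected - experimental):.2f} nm")

def best_match(theory_peaks, library):
    """Assign a computed spectrum to the experimental spectrum that leaves
    the smallest residual once the systematic offset is removed."""
    return min(library,
               key=lambda name: rms(theory_peaks
                                    - np.mean(theory_peaks - library[name])
                                    - library[name]))

library = {"PAH-A": experimental,
           "PAH-B": np.array([385.0, 410.0, 440.0, 480.0])}
print(best_match(predicted, library))
```

    Because the offset is removed before scoring, the match is decided by the pattern of peak spacings rather than the absolute positions, which is why a systematically shifted DFT spectrum can still identify the correct compound.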

  11. Prediction Of Critical Crack Sizes In Solar Cells

    NASA Technical Reports Server (NTRS)

    Chen, Chern P.

    1989-01-01

    Report presents theoretical analysis of cracking in Si and GaAs solar photovoltaic cells subjected to bending or twisting. Analysis also extended to predict critical sizes for cracks in Ge substrate coated with thin film of GaAs. Analysis leads to general conclusions. Approach and results of study useful in development of guidelines for acceptance or rejection of slightly flawed cells during manufacture.

  12. How accurately can we predict the melting points of drug-like compounds?

    PubMed

    Tetko, Igor V; Sushko, Yurii; Novotarskyi, Sergii; Patiny, Luc; Kondratov, Ivan; Petrenko, Alexander E; Charochkina, Larisa; Asiri, Abdullah M

    2014-12-22

    This article contributes a highly accurate model for predicting the melting points (MPs) of medicinal chemistry compounds. The model was developed using the largest published data set, comprising more than 47k compounds. The distributions of MPs in drug-like and drug lead sets showed that >90% of molecules melt within [50,250]°C. The final model achieved an RMSE of less than 33 °C for molecules from this temperature interval, which is the most important for medicinal chemistry users. This performance was achieved using a consensus model that was significantly more accurate than the individual models. We found that compounds with reactive and unstable groups were overrepresented among outlying compounds. These compounds could decompose during storage or measurement, thus introducing experimental errors. While filtering the data by removing outliers generally increased the accuracy of individual models, it did not significantly affect the results of the consensus models. None of the three distance-to-model measures analyzed allowed us to flag molecules whose MP values fell outside the applicability domain of the model. We believe that this negative result and the public availability of data from this article will encourage future studies to develop better approaches to define the applicability domain of models. The final model, MP data, and identified reactive groups are available online at http://ochem.eu/article/55638. PMID:25489863
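    The benefit of a consensus over individual models can be illustrated with a toy simulation. The errors below are synthetic and independent by construction; this is not the OCHEM models or data:

```python
import numpy as np

rng = np.random.default_rng(7)

# 500 hypothetical compounds with "true" melting points in [50, 250] deg C,
# and 8 individual models whose predictions carry independent noise.
true_mp = rng.uniform(50.0, 250.0, size=500)
models = [true_mp + rng.normal(0.0, 40.0, size=500) for _ in range(8)]
consensus = np.mean(models, axis=0)          # simple averaging consensus

rmse = lambda pred: np.sqrt(np.mean((pred - true_mp) ** 2))
print(f"mean individual RMSE {np.mean([rmse(m) for m in models]):.0f} deg C, "
      f"consensus RMSE {rmse(consensus):.0f} deg C")
```

    With fully independent errors the averaging cuts the RMSE by roughly the square root of the number of models; correlated errors, as in real model ensembles, reduce the gain but rarely eliminate it.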

  13. Fast and Accurate Prediction of Numerical Relativity Waveforms from Binary Black Hole Coalescences Using Surrogate Models.

    PubMed

    Blackman, Jonathan; Field, Scott E; Galley, Chad R; Szilágyi, Béla; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A

    2015-09-18

    Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic _{-2}Y_{ℓm} waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50M_{⊙} to 300M_{⊙} for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases). PMID:26430979

  16. A survey of factors contributing to accurate theoretical predictions of atomization energies and molecular structures

    NASA Astrophysics Data System (ADS)

    Feller, David; Peterson, Kirk A.; Dixon, David A.

    2008-11-01

    High level electronic structure predictions of thermochemical properties and molecular structure are capable of accuracy rivaling the very best experimental measurements as a result of rapid advances in hardware, software, and methodology. Despite the progress, real world limitations require practical approaches designed for handling general chemical systems that rely on composite strategies in which a single, intractable calculation is replaced by a series of smaller calculations. As typically implemented, these approaches produce a final, or "best," estimate that is constructed from one major component, fine-tuned by multiple corrections that are assumed to be additive. Though individually much smaller than the original, unmanageable computational problem, these corrections are nonetheless extremely costly. This study presents a survey of the widely varying magnitude of the most important components contributing to the atomization energies and structures of 106 small molecules. It combines large Gaussian basis sets and coupled cluster theory up to quadruple excitations for all systems. In selected cases, the effects of quintuple excitations and/or full configuration interaction were also considered. The availability of reliable experimental data for most of the molecules permits an expanded statistical analysis of the accuracy of the approach. In cases where reliable experimental information is currently unavailable, the present results are expected to provide some of the most accurate benchmark values available.
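    The composite strategy described here, one major component fine-tuned by corrections assumed to be additive, amounts to a simple sum. The labels follow the usual ingredients of such schemes, but the values below are purely illustrative placeholders, not results from the survey:

```python
# Composite "best estimate" of an atomization energy: a large base value
# plus small, assumed-additive corrections (all numbers invented).
corrections = {
    "CCSD(T)/CBS base":            +397.20,   # kcal/mol
    "core-valence correlation":      +1.25,
    "scalar relativity":             -0.60,
    "higher excitations (T -> Q)":   +0.45,
    "zero-point vibration":         -10.80,
    "atomic spin-orbit":             -0.25,
}

best_estimate = sum(corrections.values())
for name, value in corrections.items():
    print(f"{name:30s} {value:+8.2f}")
print(f"{'best estimate':30s} {best_estimate:+8.2f} kcal/mol")
```

    The point of the survey is precisely how large each of these terms can be: the corrections are individually small next to the base value but can each exceed the target accuracy, so none can be safely dropped.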

  17. Accurate prediction of V1 location from cortical folds in a surface coordinate system

    PubMed Central

    Hinds, Oliver P.; Rajendran, Niranjini; Polimeni, Jonathan R.; Augustinack, Jean C.; Wiggins, Graham; Wald, Lawrence L.; Rosas, H. Diana; Potthast, Andreas; Schwartz, Eric L.; Fischl, Bruce

    2008-01-01

    Previous studies demonstrated substantial variability of the location of primary visual cortex (V1) in stereotaxic coordinates when linear volume-based registration is used to match volumetric image intensities (Amunts et al., 2000). However, other qualitative reports of V1 location (Smith, 1904; Stensaas et al., 1974; Rademacher et al., 1993) suggested a consistent relationship between V1 and the surrounding cortical folds. Here, the relationship between folds and the location of V1 is quantified using surface-based analysis to generate a probabilistic atlas of human V1. High-resolution (about 200 μm) magnetic resonance imaging (MRI) at 7 T of ex vivo human cerebral hemispheres allowed identification of the full area via the stria of Gennari: a myeloarchitectonic feature specific to V1. Separate, whole-brain scans were acquired using MRI at 1.5 T to allow segmentation and mesh reconstruction of the cortical gray matter. For each individual, V1 was manually identified in the high-resolution volume and projected onto the cortical surface. Surface-based intersubject registration (Fischl et al., 1999b) was performed to align the primary cortical folds of individual hemispheres to those of a reference template representing the average folding pattern. An atlas of V1 location was constructed by computing the probability of V1 inclusion for each cortical location in the template space. This probabilistic atlas of V1 exhibits low prediction error compared to previous V1 probabilistic atlases built in volumetric coordinates. The increased predictability observed under surface-based registration suggests that the location of V1 is more accurately predicted by the cortical folds than by the shape of the brain embedded in the volume of the skull. In addition, the high quality of this atlas provides direct evidence that surface-based intersubject registration methods are superior to volume-based methods at superimposing functional areas of cortex, and therefore are better
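    Constructing a surface-based probabilistic atlas reduces to computing, per template vertex, the fraction of registered subjects whose V1 label includes that vertex. A toy sketch on a 1D "surface" with invented labels (not the ex vivo data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy template: 1000 vertices, 10 subjects. Each subject's V1 label, after
# surface registration, is a boolean mask with some residual variability.
n_vertices, n_subjects = 1000, 10
labels = np.zeros((n_subjects, n_vertices), dtype=bool)
for s in range(n_subjects):
    center = 500 + rng.integers(-20, 21)     # residual inter-subject shift
    extent = 80 + rng.integers(-10, 11)      # size variability
    labels[s, center - extent:center + extent] = True

# Probabilistic atlas: per-vertex probability of V1 inclusion.
atlas = labels.mean(axis=0)

# A maximum-probability label, and its overlap with one held-out subject.
predicted = atlas >= 0.5
subject = labels[0]
dice = 2 * np.sum(predicted & subject) / (predicted.sum() + subject.sum())
print(f"peak probability {atlas.max():.1f}, Dice overlap {dice:.2f}")
```

    The better the registration aligns the labels, the sharper the atlas and the lower the prediction error; this is the sense in which the surface-based (fold-driven) registration outperforms the volume-based one.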

  18. Unilateral Prostate Cancer Cannot be Accurately Predicted in Low-Risk Patients

    SciTech Connect

    Isbarn, Hendrik; Karakiewicz, Pierre I.; Vogel, Susanne

    2010-07-01

    Purpose: Hemiablative therapy (HAT) is increasing in popularity for treatment of patients with low-risk prostate cancer (PCa). The validity of this therapeutic modality, which exclusively treats PCa within a single prostate lobe, rests on accurate staging. We tested the accuracy of unilaterally unremarkable biopsy findings in cases of low-risk PCa patients who are potential candidates for HAT. Methods and Materials: The study population consisted of 243 men with clinical stage {<=}T2a, a prostate-specific antigen (PSA) concentration of <10 ng/ml, a biopsy-proven Gleason sum of {<=}6, and a maximum of 2 ipsilateral positive biopsy results out of 10 or more cores. All men underwent a radical prostatectomy, and pathology stage was used as the gold standard. Univariable and multivariable logistic regression models were tested for significant predictors of unilateral, organ-confined PCa. These predictors consisted of PSA, %fPSA (defined as the quotient of free [uncomplexed] PSA divided by the total PSA), clinical stage (T2a vs. T1c), gland volume, and number of positive biopsy cores (2 vs. 1). Results: Despite unilateral stage at biopsy, bilateral or even non-organ-confined PCa was reported in 64% of all patients. In multivariable analyses, no variable could clearly and independently predict the presence of unilateral PCa. This was reflected in an overall accuracy of 58% (95% confidence interval, 50.6-65.8%). Conclusions: Two-thirds of patients with unilateral low-risk PCa, confirmed by clinical stage and biopsy findings, have bilateral or non-organ-confined PCa at radical prostatectomy. This alarming finding questions the safety and validity of HAT.
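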

  19. Improving DOE-2's RESYS routine: User defined functions to provide more accurate part load energy use and humidity predictions

    SciTech Connect

    Henderson, Hugh I.; Parker, Danny; Huang, Yu J.

    2000-08-04

    In hourly energy simulations, it is important to properly predict the performance of air conditioning systems over a range of full and part load operating conditions. An important component of these calculations is to consider the performance of the cycling air conditioner and how it interacts with the building. This paper presents improved approaches to account for the part load performance of residential and light commercial air conditioning systems in DOE-2. First, more accurate correlations are given to predict the degradation of system efficiency at part load conditions. In addition, a user-defined function for RESYS is developed that provides improved predictions of air conditioner sensible and latent capacity at part load conditions. The user function also provides more accurate predictions of space humidity by adding "lumped" moisture capacitance into the calculations. The improved cooling coil model and the addition of moisture capacitance predict humidity swings that are more representative of the performance observed in real buildings.

  20. Predictive equations for energy needs for the critically ill.

    PubMed

    Walker, Renee N; Heuberger, Roschelle A

    2009-04-01

    Nutrition may affect clinical outcomes in critically ill patients, and providing either more or fewer calories than the patient needs can adversely affect outcomes. Calorie need fluctuates substantially over the course of critical illness, and nutrition delivery is often influenced by: the risk of refeeding syndrome; a hypocaloric feeding regimen; lack of feeding access; intolerance of feeding; and feeding delays for procedures. Lean body mass is the strongest determinant of resting energy expenditure, but age, sex, medications, and metabolic stress also influence the calorie requirement. Indirect calorimetry is the accepted standard for determining calorie requirement, but is unavailable or unaffordable in many centers. Moreover, indirect calorimetry is not infallible and care must be taken when interpreting the results. In the absence of calorimetry, clinicians use equations and clinical judgment to estimate calorie need. We reviewed 7 equations (American College of Chest Physicians, Harris-Benedict, Ireton-Jones 1992 and 1997, Penn State 1998 and 2003, Swinamer 1990) and their prediction accuracy. Understanding an equation's reference population and using the equation with similar patients are essential for the equation to perform similarly. Prediction accuracy among equations is rarely within 10% of the measured energy expenditure; however, in the absence of indirect calorimetry, a prediction equation is the best alternative.
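One of the equations reviewed above, Harris-Benedict, has a closed form that is easy to sketch. The coefficients below are the commonly cited classic form (weight in kg, height in cm, age in years, result in kcal/day); the example inputs are illustrative, not from the study.

```python
# Harris-Benedict estimate of resting energy expenditure (kcal/day),
# in its commonly cited classic form; inputs are illustrative.
def harris_benedict(weight_kg, height_cm, age_yr, sex):
    if sex == "male":
        return 66.47 + 13.75 * weight_kg + 5.00 * height_cm - 6.76 * age_yr
    return 655.1 + 9.56 * weight_kg + 1.85 * height_cm - 4.68 * age_yr

print(round(harris_benedict(70, 175, 40, "male")))  # ~1634 kcal/day
```

As the abstract stresses, such equations were derived on specific reference populations, so agreement with measured energy expenditure in critically ill patients is often outside 10%.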

  1. PSSP-RFE: Accurate Prediction of Protein Structural Class by Recursive Feature Extraction from PSI-BLAST Profile, Physical-Chemical Property and Functional Annotations

    PubMed Central

    Yu, Sanjiu; Zhang, Yuan; Luo, Zhong; Yang, Hua; Zhou, Yue; Zheng, Xiaoqi

    2014-01-01

    Protein structure prediction is critical to functional annotation of the massively accumulated biological sequences, which prompts an imperative need for the development of high-throughput technologies. As a first and key step in protein structure prediction, protein structural class prediction becomes an increasingly challenging task. Among most homology-based approaches, the accuracies of protein structural class prediction are sufficiently high for high similarity datasets, but still far from being satisfactory for low similarity datasets, i.e., below 40% in pairwise sequence similarity. Therefore, we present a novel method for accurate and reliable protein structural class prediction for both high and low similarity datasets. This method is based on Support Vector Machine (SVM) in conjunction with integrated features from position-specific score matrix (PSSM), PROFEAT and Gene Ontology (GO). A feature selection approach, SVM-RFE, is also used to rank the integrated feature vectors through recursively removing the feature with the lowest ranking score. The definitive top features selected by SVM-RFE are input into the SVM engines to predict the structural class of a query protein. To validate our method, jackknife tests were applied to seven widely used benchmark datasets, reaching overall accuracies between 84.61% and 99.79%, which are significantly higher than those achieved by state-of-the-art tools. These results suggest that our method could serve as an accurate and cost-effective alternative to existing methods in protein structural classification, especially for low similarity datasets. PMID:24675610
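The SVM-RFE step described above (recursively dropping the lowest-ranked feature) can be sketched with scikit-learn; the feature matrix and labels here are synthetic stand-ins for the integrated PSSM/PROFEAT/GO features, not the paper's data.

```python
# Sketch of SVM-RFE feature ranking, assuming scikit-learn is available.
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))           # toy stand-in for integrated feature vectors
y = (X[:, 0] + X[:, 3] > 0).astype(int)  # toy structural-class labels

# Recursively eliminate the lowest-ranked feature until 5 remain
selector = RFE(SVC(kernel="linear"), n_features_to_select=5, step=1)
selector.fit(X, y)
top_features = np.flatnonzero(selector.support_)
print(top_features)
```

The surviving `top_features` would then be fed to the final SVM classifier, mirroring the pipeline the abstract outlines.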

  2. Covariance matrices for use in criticality safety predictability studies

    SciTech Connect

    Derrien, H.; Larson, N.M.; Leal, L.C.

    1997-09-01

    Criticality predictability applications require as input the best available information on fissile and other nuclides. In recent years important work has been performed in the analysis of neutron transmission and cross-section data for fissile nuclei in the resonance region by using the computer code SAMMY. The code uses Bayes' method (a form of generalized least squares) for sequential analyses of several sets of experimental data. Values for Reich-Moore resonance parameters, their covariances, and the derivatives with respect to the adjusted parameters (data sensitivities) are obtained. In general, the parameter file contains several thousand values and the dimension of the covariance matrices is correspondingly large. These matrices are not reported in the current evaluated data files due to their large dimensions and to the inadequacy of the file formats. The present work has two goals: the first is to calculate the covariances of group-averaged cross sections from the covariance files generated by SAMMY, because these can be more readily utilized in criticality predictability calculations. The second goal is to propose a more practical interface between SAMMY and the evaluated files. Examples are given for ²³⁵U in the popular 199- and 238-group structures, using the latest ORNL evaluation of the ²³⁵U resonance parameters.

  3. Smoothelin and caldesmon are reliable markers for distinguishing muscularis propria from desmoplasia: a critical distinction for accurate staging of colorectal adenocarcinoma

    PubMed Central

    Roberts, Jordan A; Waters, Lindsay; Ro, Jae Y; Zhai, Qihui Jim

    2014-01-01

    An accurate distinction between deep muscularis propria invasion versus subserosal invasion by colonic adenocarcinoma is essential for the accurate staging of cancer and subsequent optimal patient management. However, problems may arise in pathologic staging when extensive desmoplasia blurs the junction between deep muscularis propria and subserosal fibroadipose tissue. To address this issue, forty-three (43) cases of colonic adenocarcinoma resections from 2007-2009 at The Methodist Hospital in Houston, TX were reviewed. These cases were selected to address possible challenges in differentiating deep muscularis propria invasion from superficial subserosal invasion based on H&E staining alone. Immunohistochemical staining using smooth muscle actin (SMA), smoothelin, and caldesmon was performed on 51 cases: 8 cases of pT1 tumors (used mainly as control); 12 pT2 tumors; and 31 pT3 tumors. All 51 (100%) had diffuse, strong (3+) immunoreactivity for caldesmon and smoothelin in the muscularis propria with a granular cytoplasmic staining pattern. However, the desmoplastic areas of these tumors, composed of spindled fibroblasts and myofibroblasts, showed negative immunostaining for caldesmon and smoothelin (0/35). SMA strongly stained the muscularis propria and weakly (1+) or moderately (2+) stained the spindled fibroblasts in the desmoplastic areas (the latter presumably because of myofibroblastic differentiation). Compared to SMA, caldesmon and smoothelin are more specific stains that allow better delineation of the muscularis propria from the desmoplastic stromal reaction, which provides a critical aid for proper staging of colonic adenocarcinoma and subsequent patient care. PMID:24551305

  4. Accurate prediction model of bead geometry in crimping butt of the laser brazing using generalized regression neural network

    NASA Astrophysics Data System (ADS)

    Rong, Y. M.; Chang, Y.; Huang, Y.; Zhang, G. J.; Shao, X. Y.

    2015-12-01

    Few studies have concentrated on predicting the bead geometry for laser brazing with crimping butt. This paper addresses the accurate prediction of the bead profile by developing a generalized regression neural network (GRNN) algorithm. First, the GRNN model was developed and trained to decrease the prediction error that may be influenced by the sample size. Then the prediction accuracy was demonstrated by comparison with other articles and with a back propagation artificial neural network (BPNN) algorithm. Finally, the reliability and stability of the GRNN model were discussed in terms of average relative error (ARE), mean square error (MSE) and root mean square error (RMSE); the maximum ARE and MSE were 6.94% and 0.0303, clearly lower than the corresponding BPNN values (14.28% and 0.0832). The prediction accuracy was thus improved by at least a factor of two, with a corresponding gain in stability.
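The three error metrics quoted above (ARE, MSE, RMSE) have standard definitions that can be computed directly; the predicted/measured bead-geometry values below are made up for illustration.

```python
# Error metrics used to compare GRNN and BPNN predictions (illustrative data).
import numpy as np

measured  = np.array([1.20, 0.95, 1.10, 1.05])
predicted = np.array([1.15, 1.00, 1.08, 1.10])

are  = np.mean(np.abs(predicted - measured) / measured) * 100  # average relative error, %
mse  = np.mean((predicted - measured) ** 2)                    # mean square error
rmse = np.sqrt(mse)                                            # root mean square error
print(round(are, 2), round(mse, 6), round(rmse, 4))
```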

  5. An Approach for Validating Actinide and Fission Product Burnup Credit Criticality Safety Analyses: Criticality (keff) Predictions

    SciTech Connect

    Scaglione, John M.; Mueller, Don E.; Wagner, John C.

    2014-12-01

    One of the most important remaining challenges associated with expanded implementation of burnup credit in the United States is the validation of depletion and criticality calculations used in the safety evaluation—in particular, the availability and use of applicable measured data to support validation, especially for fission products (FPs). Applicants and regulatory reviewers have been constrained by both a scarcity of data and a lack of clear technical basis or approach for use of the data. This paper describes a validation approach for commercial spent nuclear fuel (SNF) criticality safety (keff) evaluations based on best-available data and methods and applies the approach for representative SNF storage and transport configurations/conditions to demonstrate its usage and applicability, as well as to provide reference bias results. The criticality validation approach utilizes not only available laboratory critical experiment (LCE) data from the International Handbook of Evaluated Criticality Safety Benchmark Experiments and the French Haut Taux de Combustion program to support validation of the principal actinides but also calculated sensitivities, nuclear data uncertainties, and limited available FP LCE data to predict and verify individual biases for relevant minor actinides and FPs. The results demonstrate that (a) sufficient critical experiment data exist to adequately validate keff calculations via conventional validation approaches for the primary actinides, (b) sensitivity-based critical experiment selection is more appropriate for generating accurate application model bias and uncertainty, and (c) calculated sensitivities and nuclear data uncertainties can be used for generating conservative estimates of bias for minor actinides and FPs. Results based on the SCALE 6.1 and the ENDF/B-VII.0 cross-section libraries indicate that a conservative estimate of the bias for the minor actinides and FPs is 1.5% of their worth

  6. A simple accurate method to predict time of ponding under variable intensity rainfall

    NASA Astrophysics Data System (ADS)

    Assouline, S.; Selker, J. S.; Parlange, J.-Y.

    2007-03-01

    The prediction of the time to ponding following commencement of rainfall is fundamental to hydrologic prediction of flood, erosion, and infiltration. Most of the studies to date have focused on prediction of ponding resulting from simple rainfall patterns. This approach was suitable for rainfall reported as average values over intervals of up to a day but does not take advantage of knowledge of the complex patterns of actual rainfall now commonly recorded electronically. A straightforward approach to include the instantaneous rainfall record in the prediction of ponding time and excess rainfall using only the infiltration capacity curve is presented. This method is tested against a numerical solution of the Richards equation on the basis of an actual rainfall record. The predicted time to ponding showed mean error ≤7% for a broad range of soils, with and without surface sealing. In contrast, the standard predictions had average errors of 87%, and worst-case errors exceeding a factor of 10. In addition to errors intrinsic in the modeling framework itself, errors that arise from averaging actual rainfall records over reporting intervals were evaluated. Averaging actual rainfall records observed in Israel over periods of as little as 5 min significantly reduced predicted runoff (75% for the sealed sandy loam and 46% for the silty clay loam), while hourly averaging gave complete lack of prediction of ponding in some of the cases.
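The general idea of scanning an instantaneous rainfall record against an infiltration capacity curve can be sketched as below. This is an illustrative toy, not the authors' algorithm: a Horton-like capacity curve f(F) = fc + (f0 - fc)·exp(-k·F) and the parameter values are assumptions.

```python
# Toy time-to-ponding scan: ponding starts when rainfall intensity first
# exceeds the (declining) infiltration capacity. Capacity curve is assumed.
import math

def time_to_ponding(rain, dt, f0=50.0, fc=5.0, k=0.05):
    """rain: intensities [mm/h] per step of length dt [h]; returns ponding time [h] or None."""
    F = 0.0  # cumulative infiltration [mm]
    for i, r in enumerate(rain):
        capacity = fc + (f0 - fc) * math.exp(-k * F)
        if r > capacity:      # supply exceeds capacity: ponding begins
            return i * dt
        F += r * dt           # before ponding, all rain infiltrates
    return None

print(time_to_ponding([10, 20, 60, 80], dt=0.25))  # ponds at the third step: 0.5 h
```

Averaging the `rain` record over longer intervals smooths out the intensity peaks that trigger ponding, which is exactly the loss of predicted runoff the abstract quantifies.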

  7. Combining Evolutionary Information and an Iterative Sampling Strategy for Accurate Protein Structure Prediction.

    PubMed

    Braun, Tatjana; Koehler Leman, Julia; Lange, Oliver F

    2015-12-01

    Recent work has shown that the accuracy of ab initio structure prediction can be significantly improved by integrating evolutionary information in the form of intra-protein residue-residue contacts. Following this seminal result, much effort has been put into the improvement of contact predictions. However, there is also a substantial need to develop structure prediction protocols tailored to the type of restraints gained by contact predictions. Here, we present a structure prediction protocol that combines evolutionary information with the resolution-adapted structural recombination approach of Rosetta, called RASREC. Compared to the classic Rosetta ab initio protocol, RASREC achieves improved sampling, better convergence and higher robustness against incorrect distance restraints, making it the ideal sampling strategy for the stated problem. To demonstrate the accuracy of our protocol, we tested the approach on a diverse set of 28 globular proteins. Our method is able to converge for 26 out of the 28 targets and improves the average TM-score of the entire benchmark set from 0.55 to 0.72 when compared to the top ranked models obtained by the EVFold web server using identical contact predictions. Using a smaller benchmark, we furthermore show that the prediction accuracy of our method is only slightly reduced when the contact prediction accuracy is comparatively low. This observation is of special interest for protein sequences that only have a limited number of homologs.

  8. A machine learning approach to the accurate prediction of multi-leaf collimator positional errors

    NASA Astrophysics Data System (ADS)

    Carlson, Joel N. K.; Park, Jong Min; Park, So-Yeon; In Park, Jong; Choi, Yunseok; Ye, Sung-Joon

    2016-03-01

    Discrepancies between planned and delivered movements of multi-leaf collimators (MLCs) are an important source of errors in dose distributions during radiotherapy. In this work we used machine learning techniques to train models to predict these discrepancies, assessed the accuracy of the model predictions, and examined the impact these errors have on quality assurance (QA) procedures and dosimetry. Predictive leaf motion parameters for the models were calculated from the plan files, such as leaf position and velocity, whether the leaf was moving towards or away from the isocenter of the MLC, and many others. Differences in positions between synchronized DICOM-RT planning files and DynaLog files reported during QA delivery were used as a target response for training of the models. The final model is capable of predicting MLC positions during delivery to a high degree of accuracy. For moving MLC leaves, predicted positions were shown to be significantly closer to delivered positions than were planned positions. By incorporating predicted positions into dose calculations in the TPS, increases were shown in gamma passing rates against measured dose distributions recorded during QA delivery. For instance, head and neck plans with 1%/2 mm gamma criteria had an average increase in passing rate of 4.17% (SD = 1.54%). This indicates that the inclusion of predictions during dose calculation leads to a more realistic representation of plan delivery. To assess impact on the patient, dose-volume histograms (DVH) using delivered positions were calculated for comparison with planned and predicted DVHs. In all cases, predicted dose-volume parameters were in closer agreement to the delivered parameters than were the planned parameters, particularly for organs at risk on the periphery of the treatment area. By incorporating the predicted positions into the TPS, the treatment planner is given a more realistic view of the dose distribution as it will truly be
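The regression setup described above (plan-derived leaf features in, positional discrepancy out) can be sketched with scikit-learn. Everything below is a hedged stand-in: the features, the synthetic error model, and the choice of gradient boosting are assumptions for illustration, not the paper's implementation.

```python
# Sketch: predict MLC leaf positional error from plan-derived features.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
n = 500
position   = rng.uniform(-50, 50, n)   # planned leaf position [mm]
velocity   = rng.uniform(0, 30, n)     # planned leaf speed [mm/s]
toward_iso = rng.integers(0, 2, n)     # moving toward isocenter? (0/1)

# Synthetic target: error grows with speed (stand-in for DynaLog-derived errors)
error = 0.02 * velocity + 0.1 * toward_iso + rng.normal(0, 0.05, n)

X = np.column_stack([position, velocity, toward_iso])
model = GradientBoostingRegressor().fit(X, error)
predicted_position = position + model.predict(X)  # estimated delivered position
```

Feeding `predicted_position` (rather than `position`) into the dose calculation is the step that, per the abstract, raised gamma passing rates against measurement.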

  9. Predicting colloid transport through saturated porous media: A critical review

    NASA Astrophysics Data System (ADS)

    Molnar, Ian L.; Johnson, William P.; Gerhard, Jason I.; Willson, Clinton S.; O'Carroll, Denis M.

    2015-09-01

    Understanding and predicting colloid transport and retention in water-saturated porous media is important for the protection of human and ecological health. Early applications of colloid transport research before the 1990s included the removal of pathogens in granular drinking water filters. Since then, interest has expanded significantly to include such areas as source zone protection of drinking water systems and injection of nanometals for contaminated site remediation. This review summarizes predictive tools for colloid transport from the pore to field scales. First, we review experimental breakthrough and retention of colloids under favorable and unfavorable colloid/collector interactions (i.e., no significant and significant colloid-surface repulsion, respectively). Second, we review the continuum-scale modeling strategies used to describe observed transport behavior. Third, we review the following two components of colloid filtration theory: (i) mechanistic force/torque balance models of pore-scale colloid trajectories and (ii) approximating correlation equations used to predict colloid retention. The successes and limitations of these approaches for favorable conditions are summarized, as are recent developments to predict colloid retention under the unfavorable conditions particularly relevant to environmental applications. Fourth, we summarize the influences of physical and chemical heterogeneities on colloid transport and avenues for their prediction. Fifth, we review the upscaling of mechanistic model results to rate constants for use in continuum models of colloid behavior at the column and field scales. Overall, this paper clarifies the foundation for existing knowledge of colloid transport and retention, features recent advances in the field, critically assesses where existing approaches are successful and the limits of their application, and highlights outstanding challenges and future research opportunities. These challenges and opportunities

  10. Multi-omics integration accurately predicts cellular state in unexplored conditions for Escherichia coli

    PubMed Central

    Kim, Minseung; Rai, Navneet; Zorraquino, Violeta; Tagkopoulos, Ilias

    2016-01-01

    A significant obstacle in training predictive cell models is the lack of integrated data sources. We develop semi-supervised normalization pipelines and perform experimental characterization (growth, transcriptional, proteome) to create Ecomics, a consistent, quality-controlled multi-omics compendium for Escherichia coli with cohesive meta-data information. We then use this resource to train a multi-scale model that integrates four omics layers to predict genome-wide concentrations and growth dynamics. The genetic and environmental ontology reconstructed from the omics data is substantially different and complementary to the genetic and chemical ontologies. The integration of different layers confers an incremental increase in the prediction performance, as does the information about the known gene regulatory and protein-protein interactions. The predictive performance of the model ranges from 0.54 to 0.87 for the various omics layers, which far exceeds various baselines. This work provides an integrative framework of omics-driven predictive modelling that is broadly applicable to guide biological discovery. PMID:27713404

  11. Empirical approaches to more accurately predict benthic-pelagic coupling in biogeochemical ocean models

    NASA Astrophysics Data System (ADS)

    Dale, Andy; Stolpovsky, Konstantin; Wallmann, Klaus

    2016-04-01

    The recycling and burial of biogenic material in the sea floor plays a key role in the regulation of ocean chemistry. Proper consideration of these processes in ocean biogeochemical models is becoming increasingly recognized as an important step in model validation and prediction. However, the rate of organic matter remineralization in sediments and the benthic flux of redox-sensitive elements are difficult to predict a priori. In this communication, examples of empirical benthic flux models that can be coupled to earth system models to predict sediment-water exchange in the open ocean are presented. Large uncertainties hindering further progress in this field include knowledge of the reactivity of organic carbon reaching the sediment, the importance of episodic variability in bottom water chemistry and particle rain rates (for both the deep-sea and margins) and the role of benthic fauna. How do we meet the challenge?

  12. An endometrial gene expression signature accurately predicts recurrent implantation failure after IVF

    PubMed Central

    Koot, Yvonne E. M.; van Hooff, Sander R.; Boomsma, Carolien M.; van Leenen, Dik; Groot Koerkamp, Marian J. A.; Goddijn, Mariëtte; Eijkemans, Marinus J. C.; Fauser, Bart C. J. M.; Holstege, Frank C. P.; Macklon, Nick S.

    2016-01-01

    The primary limiting factor for effective IVF treatment is successful embryo implantation. Recurrent implantation failure (RIF) is a condition whereby couples fail to achieve pregnancy despite consecutive embryo transfers. Here we describe the collection of gene expression profiles from mid-luteal phase endometrial biopsies (n = 115) from women experiencing RIF and healthy controls. Using a signature discovery set (n = 81) we identify a signature containing 303 genes predictive of RIF. Independent validation in 34 samples shows that the gene signature predicts RIF with 100% positive predictive value (PPV). The strength of the RIF-associated expression signature also stratifies RIF patients into distinct groups with different subsequent implantation success rates. Exploration of the expression changes suggests that RIF is primarily associated with reduced cellular proliferation. The gene signature will be of value in counselling and guiding further treatment of women who fail to conceive upon IVF and suggests new avenues for developing interventions. PMID:26797113

  13. Accurate ab initio prediction of NMR chemical shifts of nucleic acids and nucleic acids/protein complexes

    PubMed Central

    Victora, Andrea; Möller, Heiko M.; Exner, Thomas E.

    2014-01-01

    NMR chemical shift predictions based on empirical methods are nowadays indispensable tools during resonance assignment and 3D structure calculation of proteins. However, owing to the very limited statistical data basis, such methods are still in their infancy in the field of nucleic acids, especially when non-canonical structures and nucleic acid complexes are considered. Here, we present an ab initio approach for predicting proton chemical shifts of arbitrary nucleic acid structures based on state-of-the-art fragment-based quantum chemical calculations. We tested our prediction method on a diverse set of nucleic acid structures including double-stranded DNA, hairpins, DNA/protein complexes and chemically-modified DNA. Overall, our quantum chemical calculations yield highly accurate predictions with mean absolute deviations of 0.3–0.6 ppm and correlation coefficients (r2) usually above 0.9. This will allow for identifying misassignments and validating 3D structures. Furthermore, our calculations reveal that chemical shifts of protons involved in hydrogen bonding are predicted significantly less accurately. This is in part caused by insufficient inclusion of solvation effects. However, it also points toward shortcomings of current force fields used for structure determination of nucleic acids. Our quantum chemical calculations could therefore provide input for force field optimization. PMID:25404135

  14. Change in body mass accurately and reliably predicts change in body water after endurance exercise.

    PubMed

    Baker, Lindsay B; Lang, James A; Kenney, W Larry

    2009-04-01

    This study tested the hypothesis that the change in body mass (ΔBM) accurately reflects the change in total body water (ΔTBW) after prolonged exercise. Subjects (4 men, 4 women; 22-36 years; 66 ± 10 kg) completed 2 h of interval running (70% VO2max) in the heat (30 °C), followed by a run to exhaustion (85% VO2max), and then sat for a 1 h recovery period. During exercise and recovery, subjects drank fluid or no fluid to maintain their BM, increase BM by 2%, or decrease BM by 2 or 4% in separate trials. Pre- and post-experiment TBW were determined using the deuterium oxide (D₂O) dilution technique and corrected for D₂O lost in urine, sweat, breath vapor, and nonaqueous hydrogen exchange. The average difference between ΔBM and ΔTBW was 0.07 ± 1.07 kg (paired t test, P = 0.29). The slope and intercept of the relation between ΔBM and ΔTBW were not significantly different from 1 and 0, respectively. The intraclass correlation coefficient between ΔBM and ΔTBW was 0.76, which is indicative of excellent reliability between methods. Measuring pre- to post-exercise ΔBM is an accurate and reliable method to assess ΔTBW.
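The agreement analysis described above (a paired t-test on the body-mass change versus the body-water change, plus a slope/intercept check) can be sketched with SciPy. The eight subject values below are illustrative numbers, not the study's data.

```python
# Sketch of the paired agreement analysis (illustrative data, assuming SciPy).
import numpy as np
from scipy import stats

delta_bm  = np.array([-1.9, -0.8, 0.1, -2.7, -1.2, 0.9, -3.8, -1.5])  # kg
delta_tbw = np.array([-1.7, -1.0, 0.3, -2.5, -1.4, 0.8, -3.6, -1.3])  # kg

t, p = stats.ttest_rel(delta_bm, delta_tbw)  # H0: mean paired difference is zero
slope, intercept, r, *_ = stats.linregress(delta_tbw, delta_bm)
print(p > 0.05, round(slope, 2), round(r**2, 2))
```

A non-significant p-value together with a slope near 1 and intercept near 0 is the pattern the study reports in support of using ΔBM as a proxy for ΔTBW.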

  15. Towards Accurate Residue-Residue Hydrophobic Contact Prediction for Alpha Helical Proteins Via Integer Linear Optimization

    PubMed Central

    Rajgaria, R.; McAllister, S. R.; Floudas, C. A.

    2008-01-01

    A new optimization-based method is presented to predict the hydrophobic residue contacts in α-helical proteins. The proposed approach uses a high resolution distance dependent force field to calculate the interaction energy between different residues of a protein. The formulation predicts the hydrophobic contacts by minimizing the sum of these contact energies. These residue contacts are highly useful in narrowing down the conformational space searched by protein structure prediction algorithms. The proposed algorithm also offers the algorithmic advantage of producing a rank ordered list of the best contact sets. This model was tested on four independent α-helical protein test sets and was found to perform very well. The average accuracy of the predictions (separated by at least six residues) obtained using the presented method was approximately 66% for single domain proteins. The average true positive and false positive distances were also calculated for each protein test set and they are 8.87 Å and 14.67 Å respectively. PMID:18767158

  16. Accurate prediction of kidney allograft outcome based on creatinine course in the first 6 months posttransplant.

    PubMed

    Fritsche, L; Hoerstrup, J; Budde, K; Reinke, P; Neumayer, H-H; Frei, U; Schlaefer, A

    2005-03-01

    Most attempts to predict early kidney allograft loss are based on the patient and donor characteristics at baseline. We investigated how the early posttransplant creatinine course compares to baseline information in the prediction of kidney graft failure within the first 4 years after transplantation. Two approaches to create a prediction rule for early graft failure were evaluated. First, the whole data set was analysed using decision-tree building software. The software, rpart, builds classification or regression models; the resulting models can be represented as binary trees. In the second approach, a Hill-Climbing algorithm was applied to define cut-off values for the median creatinine level and creatinine slope in the period between day 60 and 180 after transplantation. Of the 497 patients available for analysis, 52 (10.5%) experienced an early graft loss (graft loss within the first 4 years after transplantation). From the rpart algorithm, a single decision criterion emerged: a median creatinine value on days 60 to 180 higher than 3.1 mg/dL predicts early graft failure (accuracy 95.2% but sensitivity only 42.3%). In contrast, the Hill-Climbing algorithm delivered a cut-off of 1.8 mg/dL for the median creatinine level and a cut-off of 0.3 mg/dL per month for the creatinine slope (sensitivity = 69.5% and specificity = 79.0%). Prediction rules based on the median and slope of creatinine levels in the first half year after transplantation allow early identification of patients who are at risk of losing their graft early after transplantation. These patients may benefit from therapeutic measures tailored for this high-risk setting. PMID:15848516
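A cut-off search of the kind described (trading sensitivity against specificity for a creatinine threshold) can be sketched as a simple greedy scan. The cohort below is synthetic and the single-threshold scan maximizing Youden's J is a stand-in for the study's Hill-Climbing over two cut-offs.

```python
# Toy cut-off search for a creatinine threshold (synthetic data, not the study's).
import numpy as np

rng = np.random.default_rng(2)
creatinine = np.concatenate([rng.normal(1.4, 0.4, 90),    # surviving grafts [mg/dL]
                             rng.normal(2.4, 0.6, 10)])   # early failures [mg/dL]
failed = np.concatenate([np.zeros(90, bool), np.ones(10, bool)])

def sens_spec(cut):
    pred = creatinine > cut                                # predict failure above cut-off
    sens = (pred & failed).sum() / failed.sum()
    spec = (~pred & ~failed).sum() / (~failed).sum()
    return sens, spec

# Greedy scan for the cut-off maximizing Youden's J = sensitivity + specificity - 1
cuts = np.arange(1.0, 3.0, 0.1)
best = max(cuts, key=lambda c: sum(sens_spec(c)) - 1)
```

The trade-off the abstract reports (rpart's high-accuracy/low-sensitivity rule versus the Hill-Climbing rule's more balanced sensitivity and specificity) is exactly what moving `best` up or down this scan produces.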

  17. Accurate Prediction of Transposon-Derived piRNAs by Integrating Various Sequential and Physicochemical Features

    PubMed Central

    Luo, Longqiang; Li, Dingfang; Zhang, Wen; Tu, Shikui; Zhu, Xiaopeng; Tian, Gang

    2016-01-01

    Background Piwi-interacting RNA (piRNA) is the largest class of small non-coding RNA molecules. The transposon-derived piRNA prediction can enrich the research contents of small ncRNAs as well as help to further understand generation mechanism of gamete. Methods In this paper, we attempt to differentiate transposon-derived piRNAs from non-piRNAs based on their sequential and physicochemical features by using machine learning methods. We explore six sequence-derived features, i.e. spectrum profile, mismatch profile, subsequence profile, position-specific scoring matrix, pseudo dinucleotide composition and local structure-sequence triplet elements, and systematically evaluate their performances for transposon-derived piRNA prediction. Finally, we consider two approaches: direct combination and ensemble learning to integrate useful features and achieve high-accuracy prediction models. Results We construct three datasets, covering three species: Human, Mouse and Drosophila, and evaluate the performances of prediction models by 10-fold cross validation. In the computational experiments, direct combination models achieve AUC of 0.917, 0.922 and 0.992 on Human, Mouse and Drosophila, respectively; ensemble learning models achieve AUC of 0.922, 0.926 and 0.994 on the three datasets. Conclusions Compared with other state-of-the-art methods, our methods can lead to better performances. In conclusion, the proposed methods are promising for the transposon-derived piRNA prediction. The source codes and datasets are available in S1 File. PMID:27074043
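The evaluation protocol described (10-fold cross-validated AUC over combined sequence features) can be sketched with scikit-learn. The features, labels, and choice of random forest below are synthetic assumptions for illustration, not the paper's datasets or models.

```python
# Sketch: 10-fold cross-validated AUC for a k-mer-style feature set
# (synthetic data; assuming scikit-learn).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.random((200, 16))                    # e.g. 2-mer frequency ("spectrum") features
y = (X[:, :4].sum(axis=1) > 2).astype(int)   # toy piRNA / non-piRNA labels

auc = cross_val_score(RandomForestClassifier(n_estimators=100, random_state=0),
                      X, y, cv=10, scoring="roc_auc")
print(round(auc.mean(), 3))
```

Reporting the mean AUC over the 10 folds mirrors how the paper summarizes its direct-combination and ensemble models on the Human, Mouse and Drosophila datasets.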

  18. A Mechanistic Approach for the Prediction of Critical Power in BWR Fuel Bundles

    NASA Astrophysics Data System (ADS)

    Chandraker, Dinesh Kumar; Vijayan, Pallipattu Krishnan; Sinha, Ratan Kumar; Aritomi, Masanori

    The critical power corresponding to the Critical Heat Flux (CHF) or dryout condition is an important design parameter for the evaluation of safety margins in a nuclear fuel bundle. The empirical approaches for the prediction of CHF in a rod bundle are highly geometric specific and proprietary in nature. The critical power experiments are very expensive and technically challenging owing to the stringent simulation requirements for the rod bundle tests involving radial and axial power profiles. In view of this, the mechanistic approach has gained momentum in the thermal hydraulic community. The Liquid Film Dryout (LFD) in an annular flow is the mechanism of CHF under BWR conditions and the dryout modeling has been found to predict the CHF quite accurately for a tubular geometry. The successful extension of the mechanistic model of dryout to the rod bundle application is vital for the evaluation of critical power in the rod bundle. The present work proposes the uniform film flow approach around the rod by analyzing individual film of the subchannel bounded by rods with different heat fluxes resulting in different film flow rates around a rod and subsequently distributing the varying film flow rates of a rod to arrive at the uniform film flow rate as it has been found that the liquid film has a strong tendency to be uniform around the rod. The FIDOM-Rod code developed for the dryout prediction in BWR assemblies provides detailed solution of the multiple liquid films in a subchannel. The approach of uniform film flow rate around the rod simplifies the liquid film cross flow modeling and was found to provide dryout prediction with a good accuracy when compared with the experimental data of 16, 19 and 37 rod bundles under BWR conditions. The critical power has been predicted for a newly designed 54 rod bundle of the Advanced Heavy Water Reactor (AHWR). 
The selected constitutive models for the droplet entrainment and deposition rates validated for the dryout in tube were
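The liquid-film dryout mechanism described above reduces to an axial mass balance on the film. The sketch below integrates that balance with constant, purely illustrative deposition, entrainment, and heat-flux values (not the FIDOM-Rod correlations); dryout is flagged where the film flow rate first reaches zero.

```python
def dryout_location(w_in, q, h_fg, dep, ent, length, n=1000):
    """Integrate the film mass balance dW_f/dz = D - E - q/h_fg along a
    heated channel (W_f: film flow per unit perimeter, D/E: deposition and
    entrainment rates, q: heat flux, h_fg: latent heat). Dryout is the
    first axial location where the film flow W_f reaches zero."""
    dz = length / n
    w = w_in
    for i in range(n):
        w += (dep - ent - q / h_fg) * dz
        if w <= 0.0:
            return (i + 1) * dz   # dryout location (m)
    return None                    # film survives the whole channel

# Illustrative numbers only: a high heat flux dries the film out early,
# a low one does not.
dry = dryout_location(w_in=0.05, q=8e5, h_fg=1.5e6, dep=0.02, ent=0.01, length=4.0)
wet = dryout_location(w_in=0.05, q=1e4, h_fg=1.5e6, dep=0.02, ent=0.01, length=4.0)
```

Real dryout codes make D, E, and q functions of local flow conditions and axial power shape; the constant-coefficient version above only shows the structure of the calculation.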

  19. Accurate, conformation-dependent predictions of solvent effects on protein ionization constants

    PubMed Central

    Barth, P.; Alber, T.; Harbury, P. B.

    2007-01-01

    Predicting how aqueous solvent modulates the conformational transitions and influences the pKa values that regulate the biological functions of biomolecules remains an unsolved challenge. To address this problem, we developed FDPB_MF, a rotamer repacking method that exhaustively samples side chain conformational space and rigorously calculates multibody protein–solvent interactions. FDPB_MF predicts the effects on pKa values of various solvent exposures, large ionic strength variations, strong energetic couplings, structural reorganizations and sequence mutations. The method achieves high accuracy, with root mean square deviations within 0.3 pH unit of the experimental values measured for turkey ovomucoid third domain, hen lysozyme, Bacillus circulans xylanase, and human and Escherichia coli thioredoxins. FDPB_MF provides a faithful, quantitative assessment of electrostatic interactions in biological macromolecules. PMID:17360348
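The pKa predictions reported above rest on the standard relation between an electrostatic free-energy change and a pKa shift. The sketch below is that generic textbook relation plus the Henderson-Hasselbalch titration fraction, not the FDPB_MF method itself; the numerical example is illustrative.

```python
import math

def shifted_pka(pka_model, ddg_kcal, temp_k=298.15):
    """pKa shift implied by an electrostatic free-energy change (kcal/mol)
    for deprotonation; positive ddG raises the pKa: dpKa = ddG/(ln(10)*RT)."""
    rt = 0.0019872 * temp_k  # gas constant, kcal/(mol*K)
    return pka_model + ddg_kcal / (math.log(10) * rt)

def protonated_fraction(ph, pka):
    """Henderson-Hasselbalch fraction of the protonated species at a given pH."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

# A 1.364 kcal/mol penalty to deprotonation shifts the pKa by about one unit.
pka = shifted_pka(4.0, 1.364)
```

FDPB_MF's contribution is in computing the ΔΔG term rigorously over side-chain conformational space; the conversion to a pKa shift is the easy final step shown here.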

  20. FastRNABindR: Fast and Accurate Prediction of Protein-RNA Interface Residues

    PubMed Central

    EL-Manzalawy, Yasser; Abbas, Mostafa; Malluhi, Qutaibah; Honavar, Vasant

    2016-01-01

A wide range of biological processes, including regulation of gene expression, protein synthesis, and the replication and assembly of many viruses, are mediated by RNA-protein interactions. However, experimental determination of the structures of protein-RNA complexes is expensive and technically challenging. Hence, a number of computational tools have been developed for predicting protein-RNA interfaces. Some of the state-of-the-art protein-RNA interface predictors rely on position-specific scoring matrix (PSSM)-based encodings of the protein sequences. The computational effort needed to generate PSSMs severely limits the practical utility of protein-RNA interface prediction servers. In this work, we experiment with two approaches, random sampling and sequence similarity reduction, for extracting a representative reference database of protein sequences from the more than 50 million protein sequences in UniRef100. Our results suggest that randomly sampled databases produce better PSSM profiles, in terms of the number of hits used to generate the profile, the distance of the generated profile to the corresponding profile generated using the entire UniRef100 data, and the accuracy of the machine learning classifier trained using these profiles. Based on our results, we developed FastRNABindR, an improved version of RNABindR that predicts protein-RNA interface residues using PSSM profiles generated from 1% of the UniRef100 sequences sampled uniformly at random. To the best of our knowledge, FastRNABindR is the only online protein-RNA interface residue prediction server that, while requiring PSSM profile generation for query sequences, accepts hundreds of protein sequences per submission. 
Our approach for determining the optimal BLAST database for a protein-RNA interface residue classification task has the potential of substantially speeding up, and hence increasing the practical utility of, other amino acid sequence based predictors of protein-protein and protein
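The random-sampling step described above — drawing a uniform 1% subset of a large sequence database before profile generation — can be sketched in a few lines. The FASTA parser and the `sample_database` helper below are our own illustrative names, not FastRNABindR code.

```python
import random

def parse_fasta(text):
    """Yield (header, sequence) records from FASTA-formatted text."""
    header, seq = None, []
    for line in text.splitlines():
        if line.startswith(">"):
            if header is not None:
                yield header, "".join(seq)
            header, seq = line[1:].strip(), []
        elif line.strip():
            seq.append(line.strip())
    if header is not None:
        yield header, "".join(seq)

def sample_database(records, fraction, seed=0):
    """Uniformly sample a fraction of records (without replacement) to build
    a reduced reference database for PSSM generation."""
    records = list(records)
    k = max(1, round(fraction * len(records)))
    return random.Random(seed).sample(records, k)
```

In practice the sampled records would then be written back to FASTA and indexed (e.g. with a BLAST database builder) before profile searches.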

  2. Robust and Accurate Modeling Approaches for Migraine Per-Patient Prediction from Ambulatory Data

    PubMed Central

    Pagán, Josué; Irene De Orbe, M.; Gago, Ana; Sobrado, Mónica; Risco-Martín, José L.; Vivancos Mora, J.; Moya, José M.; Ayala, José L.

    2015-01-01

Migraine is one of the most widespread neurological disorders, and its medical treatment represents a high percentage of the costs of health systems. In some patients, characteristic symptoms that precede the headache appear. However, they are nonspecific, and their prediction horizon is unknown and highly variable; hence, these symptoms are of little use for prediction and cannot be relied on to advance the intake of drugs so that they are effective in neutralizing the pain. To address this problem, this paper sets up a realistic monitoring scenario in which hemodynamic variables from real patients are monitored in ambulatory conditions with a wireless body sensor network (WBSN). The acquired data are used to evaluate the predictive capabilities of several modeling approaches and their robustness against noise and sensor failures. The results obtained encourage the development of per-patient models based on state-space models (N4SID) that are capable of providing average forecast windows of 47 min with a low rate of false positives. PMID:26134103
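Once a linear state-space model has been identified (the N4SID step itself is beyond this sketch), forecasting is just iterating the state equation forward. The sketch below uses an invented 2-state oscillator as a stand-in for identified hemodynamic dynamics; the matrices are illustrative, not fitted to patient data.

```python
def forecast(A, C, x0, steps):
    """Iterate x[k+1] = A x[k], y[k] = C x[k] to forecast future outputs
    from a current state estimate x0 (2-state model, plain lists)."""
    x = list(x0)
    outputs = []
    for _ in range(steps):
        x = [A[0][0] * x[0] + A[0][1] * x[1],
             A[1][0] * x[0] + A[1][1] * x[1]]
        outputs.append(C[0] * x[0] + C[1] * x[1])
    return outputs

# A lightly damped oscillator as a stand-in for identified hemodynamics.
A = [[0.95, 0.20], [-0.20, 0.95]]
C = [1.0, 0.0]
predicted = forecast(A, C, x0=[1.0, 0.0], steps=10)
```

A migraine alarm would compare such a forecast trajectory against a threshold; the forecast window is how far ahead the model stays reliable.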

  3. Revisiting the blind tests in crystal structure prediction: accurate energy ranking of molecular crystals.

    PubMed

    Asmadi, Aldi; Neumann, Marcus A; Kendrick, John; Girard, Pascale; Perrin, Marc-Antoine; Leusen, Frank J J

    2009-12-24

    In the 2007 blind test of crystal structure prediction hosted by the Cambridge Crystallographic Data Centre (CCDC), a hybrid DFT/MM method correctly ranked each of the four experimental structures as having the lowest lattice energy of all the crystal structures predicted for each molecule. The work presented here further validates this hybrid method by optimizing the crystal structures (experimental and submitted) of the first three CCDC blind tests held in 1999, 2001, and 2004. Except for the crystal structures of compound IX, all structures were reminimized and ranked according to their lattice energies. The hybrid method computes the lattice energy of a crystal structure as the sum of the DFT total energy and a van der Waals (dispersion) energy correction. Considering all four blind tests, the crystal structure with the lowest lattice energy corresponds to the experimentally observed structure for 12 out of 14 molecules. Moreover, good geometrical agreement is observed between the structures determined by the hybrid method and those measured experimentally. In comparison with the correct submissions made by the blind test participants, all hybrid optimized crystal structures (apart from compound II) have the smallest calculated root mean squared deviations from the experimentally observed structures. It is predicted that a new polymorph of compound V exists under pressure.
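The ranking criterion described above is simply lattice energy computed as the DFT total energy plus a dispersion correction, with the lowest-energy structure predicted to be the experimental one. A minimal sketch of that ranking step, with invented structure names and energies:

```python
def rank_structures(candidates):
    """Rank candidate crystal structures by lattice energy, computed as the
    DFT total energy plus a van der Waals (dispersion) correction."""
    scored = [(name, e_dft + e_disp) for name, e_dft, e_disp in candidates]
    return sorted(scored, key=lambda item: item[1])

# Illustrative energies (arbitrary units): (name, E_DFT, E_dispersion).
candidates = [("predicted_A", -100.0, -5.0),
              ("experimental", -101.2, -5.5),
              ("predicted_B", -100.8, -4.9)]
ranking = rank_structures(candidates)  # lowest lattice energy ranked first
```

The hard part of the hybrid method is of course computing the two energy terms accurately; the ranking itself is this one-line sort.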

  4. Fast and accurate numerical method for predicting gas chromatography retention time.

    PubMed

    Claumann, Carlos Alberto; Wüst Zibetti, André; Bolzan, Ariovaldo; Machado, Ricardo A F; Pinto, Leonel Teixeira

    2015-08-01

Predictive modeling of compound retention in gas chromatography depends on the retention factor (ki) and on the flow of the mobile phase. Thus, different approaches for determining an analyte's ki in column chromatography have been developed. The main one is based on the thermodynamic properties of the component and on the characteristics of the stationary phase. These models can be used to estimate parameters and to optimize temperature programming in gas chromatography for the separation of compounds. Different authors have proposed numerical methods for solving these models, but those methods demand considerable computational time. Hence, a new method for solving the predictive model of analyte retention time is presented. The algorithm is an alternative to traditional methods because it recasts the calculation as a root-finding problem within defined intervals. The proposed approach allows the retention time tr to be calculated to a user-specified accuracy with a significant reduction in computational time; it can also be used to evaluate the performance of other prediction methods.
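The root-finding framing can be illustrated with a toy model: under a temperature program, the analyte's fractional migration X(t) grows monotonically, and the retention time is the root of X(t) − 1 = 0. The retention-factor model k(T) = exp(B/T − A), the dead time, and all constants below are invented for illustration and are not the paper's model; bisection stands in for whichever bracketing root-finder is used.

```python
import math

def migration(t_r, t0=100.0, T0=320.0, ramp=1.0/6.0, A=2.0, B=2000.0, n=2000):
    """Fraction of the column traversed after time t_r under a linear
    temperature program T(t) = T0 + ramp*t (10 K/min here), with an
    illustrative retention-factor model k(T) = exp(B/T - A) and dead time t0.
    Local migration rate is 1/(t0*(1+k)); trapezoidal quadrature."""
    dt = t_r / n
    total = 0.0
    for i in range(n + 1):
        k = math.exp(B / (T0 + ramp * i * dt) - A)
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * dt / (t0 * (1.0 + k))
    return total

def retention_time(lo=1.0, hi=10000.0, tol=1e-6):
    """Bisection on migration(t) - 1 = 0; migration is monotone in t,
    so the root in [lo, hi] is unique once bracketed."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if migration(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because each bisection step halves the bracket, the user directly controls the accuracy of tr through `tol`, which is the property the abstract emphasizes.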

  5. Accurate structure prediction of peptide–MHC complexes for identifying highly immunogenic antigens

    SciTech Connect

    Park, Min-Sun; Park, Sung Yong; Miller, Keith R.; Collins, Edward J.; Lee, Ha Youn

    2013-11-01

Designing an optimal HIV-1 vaccine faces the challenge of identifying antigens that induce a broad immune capacity. One factor controlling the breadth of T cell responses is the surface morphology of a peptide–MHC complex. Here, we present an in silico protocol for predicting peptide–MHC structure. A robust signature of a conformational transition was identified during all-atom molecular dynamics, which results in a model with high accuracy. A large test set was used in constructing our protocol, and we went a step further with a blind test on a wild-type peptide and two highly immunogenic mutants, which predicted substantial conformational changes in both mutants. The center residues at position five of the analogs were configured to be accessible to solvent, forming a prominent surface, while the corresponding residue of the wild-type peptide pointed laterally toward the side of the binding cleft. We then experimentally determined the structures of the blind test set using high-resolution X-ray crystallography, which verified the predicted conformational changes. Our observations strongly support a positive association between the surface morphology of a peptide–MHC complex and its immunogenicity. Our study offers the prospect of enhancing the immunogenicity of vaccines by identifying MHC-binding immunogens.

  7. HAAD: A quick algorithm for accurate prediction of hydrogen atoms in protein structures.

    PubMed

    Li, Yunqi; Roy, Ambrish; Zhang, Yang

    2009-08-20

    Hydrogen constitutes nearly half of all atoms in proteins and their positions are essential for analyzing hydrogen-bonding interactions and refining atomic-level structures. However, most protein structures determined by experiments or computer prediction lack hydrogen coordinates. We present a new algorithm, HAAD, to predict the positions of hydrogen atoms based on the positions of heavy atoms. The algorithm is built on the basic rules of orbital hybridization followed by the optimization of steric repulsion and electrostatic interactions. We tested the algorithm using three independent data sets: ultra-high-resolution X-ray structures, structures determined by neutron diffraction, and NOE proton-proton distances. Compared with the widely used programs CHARMM and REDUCE, HAAD has a significantly higher accuracy, with the average RMSD of the predicted hydrogen atoms to the X-ray and neutron diffraction structures decreased by 26% and 11%, respectively. Furthermore, hydrogen atoms placed by HAAD have more matches with the NOE restraints and fewer clashes with heavy atoms. The average CPU cost by HAAD is 18 and 8 times lower than that of CHARMM and REDUCE, respectively. The significant advantage of HAAD in both the accuracy and the speed of the hydrogen additions should make HAAD a useful tool for the detailed study of protein structure and function. Both an executable and the source code of HAAD are freely available at http://zhang.bioinformatics.ku.edu/HAAD.

  8. Accurate single-sequence prediction of solvent accessible surface area using local and global features.

    PubMed

    Faraggi, Eshel; Zhou, Yaoqi; Kloczkowski, Andrzej

    2014-11-01

We present a new approach for predicting the Accessible Surface Area (ASA) using a General Neural Network (GENN). The novelty of the approach lies in not using residue mutation profiles generated by multiple sequence alignments as descriptive inputs. Instead, we use solely sequential window information and global features such as the single-residue and two-residue compositions of the chain. The resulting predictor is both far more efficient than sequence-alignment-based predictors and of comparable accuracy to them; introduction of the global inputs contributes significantly to achieving this accuracy. The predictor, termed ASAquick, is tested on predicting the ASA of globular proteins and found to perform similarly well for so-called easy and hard cases, indicating generalizability and possible usability for de novo protein structure prediction. The source code and Linux executables for GENN and ASAquick are available from Research and Information Systems at http://mamiris.com, from the SPARKS Lab at http://sparks-lab.org, and from the Battelle Center for Mathematical Medicine at http://mathmed.org. PMID:25204636
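The alignment-free inputs described above — a local sliding window plus global chain composition — can be sketched as a feature encoder. The window size, one-hot encoding, and restriction to single-residue composition below are our illustrative choices, not ASAquick's exact feature set.

```python
AA = "ACDEFGHIKLMNPQRSTVWY"

def residue_features(seq, i, window=7):
    """One-hot encode a window centered at position i (zero-padded at the
    termini) and append the chain's global amino-acid composition."""
    half = window // 2
    feats = []
    for j in range(i - half, i + half + 1):
        onehot = [0.0] * len(AA)
        if 0 <= j < len(seq):
            onehot[AA.index(seq[j])] = 1.0
        feats.extend(onehot)
    comp = [seq.count(a) / len(seq) for a in AA]
    return feats + comp
```

Because nothing here requires a database search, encoding a full chain costs milliseconds, which is the efficiency argument the abstract makes against PSSM-based inputs.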

  9. Accurate prediction of interfacial residues in two-domain proteins using evolutionary information: implications for three-dimensional modeling.

    PubMed

    Bhaskara, Ramachandra M; Padhi, Amrita; Srinivasan, Narayanaswamy

    2014-07-01

With the preponderance of multidomain proteins in eukaryotic genomes, it is essential to recognize the constituent domains and their functions. Function often involves communication across domain interfaces, and knowledge of the interacting sites is essential to our understanding of the structure-function relationship. Using evolutionary information extracted from homologous domains occurring in at least two diverse domain architectures (single- and multidomain), we predict the interface residues of domains from two-domain proteins. We also use information from the three-dimensional structures of the individual domains of two-domain proteins to train a naïve Bayes classifier to predict the interfacial residues. Our predictions are highly accurate (∼85%) and specific (∼95%) to domain-domain interfaces. The method applies to multidomain proteins whose domains occur in more than one architectural context. Using the predicted residues to constrain the domain-domain interaction, rigid-body docking provided accurate full-length protein structures with the correct orientation of the domains. We believe that these results can be of considerable interest for rational protein and interaction design, apart from providing valuable information on the nature of the interactions.
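A naïve Bayes classifier of the kind mentioned above is simple enough to sketch from scratch. The two toy features below (a conservation score and a surface-exposure value per residue) and the training points are invented stand-ins for the paper's evolutionary features, not its trained model.

```python
import math

class GaussianNB:
    """Minimal Gaussian naive Bayes: per-class feature means and variances,
    prediction by maximizing the log posterior under feature independence."""

    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.priors, self.stats = {}, {}
        for c in self.classes:
            rows = [x for x, label in zip(X, y) if label == c]
            self.priors[c] = len(rows) / len(X)
            params = []
            for col in zip(*rows):
                mu = sum(col) / len(col)
                var = max(1e-6, sum((v - mu) ** 2 for v in col) / len(col))
                params.append((mu, var))
            self.stats[c] = params
        return self

    def predict(self, x):
        def log_post(c):
            lp = math.log(self.priors[c])
            for v, (mu, var) in zip(x, self.stats[c]):
                lp -= 0.5 * math.log(2 * math.pi * var) + (v - mu) ** 2 / (2 * var)
            return lp
        return max(self.classes, key=log_post)

# Toy per-residue features: (conservation score, relative surface exposure);
# label 1 = interface residue, 0 = non-interface.
X = [(0.9, 0.8), (0.85, 0.7), (0.2, 0.3), (0.1, 0.4)]
y = [1, 1, 0, 0]
model = GaussianNB().fit(X, y)
```

The independence assumption is what makes the model cheap to train on per-residue features; the paper's accuracy comes from the quality of the evolutionary features fed into it, not from the classifier's sophistication.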

  10. Predictive maintenance of critical equipment in industrial processes

    NASA Astrophysics Data System (ADS)

    Hashemian, Hashem M.

    This dissertation is an account of present and past research and development (R&D) efforts conducted by the author to develop and implement new technology for predictive maintenance and equipment condition monitoring in industrial processes. In particular, this dissertation presents the design of an integrated condition-monitoring system that incorporates the results of three current R&D projects with a combined funding of $2.8 million awarded to the author by the U.S. Department of Energy (DOE). This system will improve the state of the art in equipment condition monitoring and has applications in numerous industries including chemical and petrochemical plants, aviation and aerospace, electric power production and distribution, and a variety of manufacturing processes. The work that is presented in this dissertation is unique in that it introduces a new class of condition-monitoring methods that depend predominantly on the normal output of existing process sensors. It also describes current R&D efforts to develop data acquisition systems and data analysis algorithms and software packages that use the output of these sensors to determine the condition and health of industrial processes and their equipment. For example, the output of a pressure sensor in an operating plant can be used not only to indicate the pressure, but also to verify the calibration and response time of the sensor itself and identify anomalies in the process such as blockages, voids, and leaks that can interfere with accurate measurement of process parameters or disturb the plant's operation, safety, or reliability. Today, process data are typically collected at a rate of one sample per second (1 Hz) or slower. If this sampling rate is increased to 100 samples per second or higher, much more information can be extracted from the normal output of a process sensor and then used for condition monitoring, equipment performance measurements, and predictive maintenance. A fast analog-to-digital (A
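The sampling-rate argument above can be made concrete with a toy demonstration: a 10 Hz oscillatory anomaly riding on a steady process reading is completely invisible when sampled at 1 Hz (it aliases onto the constant samples) but obvious at 100 Hz. The signal model and amplitudes are invented for illustration.

```python
import math

def sample_signal(fs, duration=10.0, anomaly_hz=10.0):
    """Steady process reading plus a small 10 Hz oscillatory anomaly,
    sampled at rate fs (Hz)."""
    n = int(duration * fs)
    return [5.0 + 0.5 * math.sin(2 * math.pi * anomaly_hz * i / fs)
            for i in range(n)]

def variance(xs):
    mu = sum(xs) / len(xs)
    return sum((x - mu) ** 2 for x in xs) / len(xs)

var_1hz = variance(sample_signal(fs=1.0))      # ~0: anomaly invisible at 1 Hz
var_100hz = variance(sample_signal(fs=100.0))  # ~0.125: anomaly plainly visible
```

This is the Nyquist argument behind the dissertation's case for faster data acquisition: dynamics above half the sampling rate simply cannot be recovered from the slow record.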

  11. Comparative motif discovery combined with comparative transcriptomics yields accurate targetome and enhancer predictions.

    PubMed

    Naval-Sánchez, Marina; Potier, Delphine; Haagen, Lotte; Sánchez, Máximo; Munck, Sebastian; Van de Sande, Bram; Casares, Fernando; Christiaens, Valerie; Aerts, Stein

    2013-01-01

    The identification of transcription factor binding sites, enhancers, and transcriptional target genes often relies on the integration of gene expression profiling and computational cis-regulatory sequence analysis. Methods for the prediction of cis-regulatory elements can take advantage of comparative genomics to increase signal-to-noise levels. However, gene expression data are usually derived from only one species. Here we investigate tissue-specific cross-species gene expression profiling by high-throughput sequencing, combined with cross-species motif discovery. First, we compared different methods for expression level quantification and cross-species integration using Tag-seq data. Using the optimal pipeline, we derived a set of genes with conserved expression during retinal determination across Drosophila melanogaster, Drosophila yakuba, and Drosophila virilis. These genes are enriched for binding sites of eye-related transcription factors including the zinc-finger Glass, a master regulator of photoreceptor differentiation. Validation of predicted Glass targets using RNA-seq in homozygous glass mutants confirms that the majority of our predictions are expressed downstream from Glass. Finally, we tested nine candidate enhancers by in vivo reporter assays and found eight of them to drive GFP in the eye disc, of which seven colocalize with the Glass protein, namely, scrt, chp, dpr10, CG6329, retn, Lim3, and dmrt99B. In conclusion, we show for the first time the combined use of cross-species expression profiling with cross-species motif discovery as a method to define a core developmental program, and we augment the candidate Glass targetome from a single known target gene, lozenge, to at least 62 conserved transcriptional targets. PMID:23070853

  12. Accurate and Rigorous Prediction of the Changes in Protein Free Energies in a Large-Scale Mutation Scan.

    PubMed

    Gapsys, Vytautas; Michielssens, Servaas; Seeliger, Daniel; de Groot, Bert L

    2016-06-20

    The prediction of mutation-induced free-energy changes in protein thermostability or protein-protein binding is of particular interest in the fields of protein design, biotechnology, and bioengineering. Herein, we achieve remarkable accuracy in a scan of 762 mutations estimating changes in protein thermostability based on the first principles of statistical mechanics. The remaining error in the free-energy estimates appears to be due to three sources in approximately equal parts, namely sampling, force-field inaccuracies, and experimental uncertainty. We propose a consensus force-field approach, which, together with an increased sampling time, leads to a free-energy prediction accuracy that matches those reached in experiments. This versatile approach enables accurate free-energy estimates for diverse proteins, including the prediction of changes in the melting temperature of the membrane protein neurotensin receptor 1. PMID:27122231

  13. Accurate prediction of cellular co-translational folding indicates proteins can switch from post- to co-translational folding

    NASA Astrophysics Data System (ADS)

    Nissley, Daniel A.; Sharma, Ajeet K.; Ahmed, Nabeel; Friedrich, Ulrike A.; Kramer, Günter; Bukau, Bernd; O'Brien, Edward P.

    2016-02-01

    The rates at which domains fold and codons are translated are important factors in determining whether a nascent protein will co-translationally fold and function or misfold and malfunction. Here we develop a chemical kinetic model that calculates a protein domain's co-translational folding curve during synthesis using only the domain's bulk folding and unfolding rates and codon translation rates. We show that this model accurately predicts the course of co-translational folding measured in vivo for four different protein molecules. We then make predictions for a number of different proteins in yeast and find that synonymous codon substitutions, which change translation-elongation rates, can switch some protein domains from folding post-translationally to folding co-translationally--a result consistent with previous experimental studies. Our approach explains essential features of co-translational folding curves and predicts how varying the translation rate at different codon positions along a transcript's coding sequence affects this self-assembly process.
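The kinetic picture described above — each codon's dwell time lets the nascent domain relax toward its two-state folding equilibrium — admits a compact closed-form update per codon. The rate constants below are illustrative, not fitted values from the paper.

```python
import math

def cotranslational_folding(k_fold, k_unfold, codon_rates, p0=0.0):
    """Folded probability of a nascent domain after each codon is translated.
    During a codon's dwell time tau = 1/k_trans, P(folded) relaxes toward
    the equilibrium f_eq = kf/(kf+ku) with rate kf + ku (two-state kinetics)."""
    f_eq = k_fold / (k_fold + k_unfold)
    lam = k_fold + k_unfold
    p, curve = p0, []
    for k_trans in codon_rates:
        p = f_eq + (p - f_eq) * math.exp(-lam / k_trans)
        curve.append(p)
    return curve

# Illustrative rates (s^-1): fast codons leave the domain mostly unfolded
# at release; slow codons let it reach equilibrium co-translationally.
fast = cotranslational_folding(0.5, 0.1, [100.0] * 30)
slow = cotranslational_folding(0.5, 0.1, [0.5] * 30)
```

This is exactly the mechanism by which synonymous codon substitutions, which change only the per-codon rates, can switch a domain between post- and co-translational folding.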

  14. PSI: A Comprehensive and Integrative Approach for Accurate Plant Subcellular Localization Prediction

    PubMed Central

    Chen, Ming

    2013-01-01

Predicting the subcellular localization of proteins overcomes the major drawbacks of high-throughput localization experiments, which are costly and time-consuming. However, current subcellular localization predictors are limited in scope and accuracy. In particular, most predictors perform well on certain locations or with certain data sets and poorly on others. Here, we present PSI, a novel high-accuracy web server for plant subcellular localization prediction. PSI combines the wisdom of multiple specialized predictors through a joint approach of group decision-making strategies and machine learning methods to give an integrated best result. The overall accuracy obtained (up to 93.4%) was higher than that of the best individual predictor (CELLO) by ∼10.7%. The precision for each predictable subcellular location (more than 80%) far exceeds that of the individual predictors. PSI can also handle multi-localization proteins. It is expected to be a powerful tool in protein location engineering as well as in the plant sciences, and the strategy employed could be applied to other integrative problems. A user-friendly web server, PSI, has been developed for free access at http://bis.zju.edu.cn/psi/. PMID:24194827
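One simple form of the group-decision integration described above is weighted voting across component predictors. The weights and the use of WoLF PSORT and TargetP alongside CELLO below are hypothetical illustrations, not PSI's actual components or decision rule.

```python
from collections import defaultdict

def integrate_predictions(votes, weights):
    """Combine per-predictor subcellular location calls by weighted voting
    and return the location with the highest total weight."""
    scores = defaultdict(float)
    for predictor, location in votes.items():
        scores[location] += weights.get(predictor, 1.0)
    return max(scores, key=scores.get)

# Hypothetical component calls and reliability weights.
votes = {"CELLO": "chloroplast", "WoLF_PSORT": "chloroplast", "TargetP": "mitochondrion"}
weights = {"CELLO": 0.9, "WoLF_PSORT": 0.7, "TargetP": 0.8}
consensus = integrate_predictions(votes, weights)
```

Per-location weights (a predictor trusted for chloroplast calls but not for nuclear ones) are a natural refinement of this scheme and closer in spirit to a per-location precision-driven combiner.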

  15. CRYSpred: accurate sequence-based protein crystallization propensity prediction using sequence-derived structural characteristics.

    PubMed

    Mizianty, Marcin J; Kurgan, Lukasz A

    2012-01-01

The relatively low success rate of X-ray crystallography, which is the most popular method for solving protein structures, motivates the development of novel methods that support the selection of tractable protein targets. This aspect is particularly important in the context of current structural genomics efforts, which allow for a certain degree of flexibility in target selection. We propose CRYSpred, a novel in-silico crystallization propensity predictor that uses a set of 15 novel features drawing on a broad range of inputs, including charge, hydrophobicity, and amino acid composition derived from the protein chain, as well as solvent accessibility and disorder predicted from the protein sequence. Our method outperforms seven modern crystallization propensity predictors on three benchmark test datasets that are independent of the training dataset. The strong predictive performance of CRYSpred is attributed to the careful design of the features, the utilization of a comprehensive set of inputs, and the use of a Support Vector Machine classifier. The inputs utilized by CRYSpred are well aligned with the existing rules of thumb used in structural genomics studies. PMID:21919861
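The chain-derived inputs named above (charge, hydrophobicity, composition) can be computed directly from the sequence. The sketch below uses the standard Kyte-Doolittle hydropathy scale and a simple K/R-minus-D/E net-charge count; it is a generic feature extractor, not CRYSpred's 15-feature set.

```python
# Kyte-Doolittle hydropathy scale.
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
      "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}

def sequence_features(seq):
    """Net charge (K/R minus D/E, ignoring His), mean Kyte-Doolittle
    hydropathy, and amino-acid composition for a protein chain."""
    charge = sum(seq.count(a) for a in "KR") - sum(seq.count(a) for a in "DE")
    hydropathy = sum(KD[a] for a in seq) / len(seq)
    comp = {a: seq.count(a) / len(seq) for a in sorted(set(seq))}
    return {"net_charge": charge, "mean_hydropathy": hydropathy, "composition": comp}
```

Features like these are exactly the kind of inputs that the structural-genomics rules of thumb (avoid highly charged or strongly hydrophobic targets) already encode informally.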

  17. Size-extensivity-corrected multireference configuration interaction schemes to accurately predict bond dissociation energies of oxygenated hydrocarbons

    SciTech Connect

    Oyeyemi, Victor B.; Krisiloff, David B.; Keith, John A.; Libisch, Florian; Pavone, Michele; Carter, Emily A.

    2014-01-28

    Oxygenated hydrocarbons play important roles in combustion science as renewable fuels and additives, but many details about their combustion chemistry remain poorly understood. Although many methods exist for computing accurate electronic energies of molecules at equilibrium geometries, a consistent description of entire combustion reaction potential energy surfaces (PESs) requires multireference correlated wavefunction theories. Here we use bond dissociation energies (BDEs) as a foundational metric to benchmark methods based on multireference configuration interaction (MRCI) for several classes of oxygenated compounds (alcohols, aldehydes, carboxylic acids, and methyl esters). We compare results from multireference singles and doubles configuration interaction to those utilizing a posteriori and a priori size-extensivity corrections, benchmarked against experiment and coupled cluster theory. We demonstrate that size-extensivity corrections are necessary for chemically accurate BDE predictions even in relatively small molecules and furnish examples of unphysical BDE predictions resulting from using too-small orbital active spaces. We also outline the specific challenges in using MRCI methods for carbonyl-containing compounds. The resulting complete basis set extrapolated, size-extensivity-corrected MRCI scheme produces BDEs generally accurate to within 1 kcal/mol, laying the foundation for this scheme's use on larger molecules and for more complex regions of combustion PESs.
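A familiar example of the a posteriori size-extensivity corrections benchmarked above is the Davidson correction, which scales the CISD correlation energy by the weight of excited configurations. The formula is the standard textbook one, not the authors' specific scheme, and the energies below are illustrative.

```python
def davidson_corrected_energy(e_ref, e_cisd, c0):
    """A posteriori (Davidson) size-extensivity correction:
    E_Q = E_CISD + (1 - c0**2) * (E_CISD - E_ref),
    where c0 is the reference-configuration coefficient in the CI expansion."""
    correlation = e_cisd - e_ref
    return e_cisd + (1.0 - c0 ** 2) * correlation

# Illustrative numbers (hartree): ~5% excited-configuration weight.
e_total = davidson_corrected_energy(e_ref=-76.05, e_cisd=-76.30, c0=0.975)
```

Because the correction grows with the excited-configuration weight, it recovers part of the size-extensivity error that makes raw CISD bond dissociation energies systematically too small for larger molecules.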

  18. Accurate predictions of dielectrophoretic force and torque on particles with strong mutual field, particle, and wall interactions

    NASA Astrophysics Data System (ADS)

    Liu, Qianlong; Reifsnider, Kenneth

    2012-11-01

The basis of dielectrophoresis (DEP) is the prediction of the force and torque on particles. The classical approach to this prediction is the effective moment method, which, however, is an approximate approach that assumes infinitesimal particles. It is therefore well known that for finite-sized particles the DEP approximation becomes inaccurate as the mutual field, particle, and wall interactions become strong, a situation presently attracting extensive research because of its practically significant applications. In the present talk, we provide accurate calculations of the force and torque on the particles from first principles, directly resolving the local geometry and properties and accurately accounting for the mutual interactions of finite-sized particles with both dielectric polarization and conduction in a sinusoidally steady-state electric field. Since the approach has a significant advantage over other numerical methods in efficiently simulating many closely packed particles, it provides an important, unique, and accurate technique for investigating complex DEP phenomena, for example heterogeneous mixtures containing particle chains, nanoparticle assembly, biological cells, and non-spherical effects. This study was supported by the Department of Energy under funding for an EFRC (the HeteroFoaM Center), grant no. DE-SC0001061.
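The effective moment method the abstract contrasts against reduces, for a homogeneous sphere, to the Clausius-Mossotti factor evaluated with complex permittivities. The sketch below computes Re[f_CM] (this is the standard dipole approximation that the first-principles approach improves upon); the material parameters are illustrative.

```python
import math

def re_clausius_mossotti(eps_p, sigma_p, eps_m, sigma_m, omega):
    """Real part of the Clausius-Mossotti factor for a homogeneous sphere,
    with complex permittivities eps* = eps - 1j*sigma/omega.
    Time-averaged DEP force: F = 2*pi*eps_m*R**3 * Re[f_CM] * grad(E_rms**2)."""
    ep = complex(eps_p, -sigma_p / omega)
    em = complex(eps_m, -sigma_m / omega)
    return ((ep - em) / (ep + 2 * em)).real

EPS0 = 8.854e-12
# A conductive particle in a weakly conductive aqueous medium at low
# frequency experiences positive DEP: Re[f_CM] approaches +1.
f_cm = re_clausius_mossotti(eps_p=2.5 * EPS0, sigma_p=1e-2,
                            eps_m=78 * EPS0, sigma_m=1e-5,
                            omega=2 * math.pi * 100)
```

Note that Re[f_CM] is bounded between −0.5 and +1.0 in this approximation; close-packed particles and nearby walls violate the underlying isolated-dipole assumption, which is exactly where the first-principles calculation is needed.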

  19. Size-extensivity-corrected multireference configuration interaction schemes to accurately predict bond dissociation energies of oxygenated hydrocarbons

    NASA Astrophysics Data System (ADS)

    Oyeyemi, Victor B.; Krisiloff, David B.; Keith, John A.; Libisch, Florian; Pavone, Michele; Carter, Emily A.

    2014-01-01

    Oxygenated hydrocarbons play important roles in combustion science as renewable fuels and additives, but many details about their combustion chemistry remain poorly understood. Although many methods exist for computing accurate electronic energies of molecules at equilibrium geometries, a consistent description of entire combustion reaction potential energy surfaces (PESs) requires multireference correlated wavefunction theories. Here we use bond dissociation energies (BDEs) as a foundational metric to benchmark methods based on multireference configuration interaction (MRCI) for several classes of oxygenated compounds (alcohols, aldehydes, carboxylic acids, and methyl esters). We compare results from multireference singles and doubles configuration interaction to those utilizing a posteriori and a priori size-extensivity corrections, benchmarked against experiment and coupled cluster theory. We demonstrate that size-extensivity corrections are necessary for chemically accurate BDE predictions even in relatively small molecules and furnish examples of unphysical BDE predictions resulting from using too-small orbital active spaces. We also outline the specific challenges in using MRCI methods for carbonyl-containing compounds. The resulting complete basis set extrapolated, size-extensivity-corrected MRCI scheme produces BDEs generally accurate to within 1 kcal/mol, laying the foundation for this scheme's use on larger molecules and for more complex regions of combustion PESs.

  20. The Compensatory Reserve For Early and Accurate Prediction Of Hemodynamic Compromise: A Review of the Underlying Physiology.

    PubMed

    Convertino, Victor A; Wirt, Michael D; Glenn, John F; Lein, Brian C

    2016-06-01

    Shock is deadly and unpredictable if it is not recognized and treated in early stages of hemorrhage. Unfortunately, measurements of standard vital signs that are displayed on current medical monitors fail to provide accurate or early indicators of shock because of physiological mechanisms that effectively compensate for blood loss. As a result of new insights provided by the latest research on the physiology of shock using human experimental models of controlled hemorrhage, it is now recognized that measurement of the body's reserve to compensate for reduced circulating blood volume is the single most important indicator for early and accurate assessment of shock. We have called this function the "compensatory reserve," which can be accurately assessed by real-time measurements of changes in the features of the arterial waveform. In this paper, the physiology underlying the development and evaluation of a new noninvasive technology that allows for real-time measurement of the compensatory reserve will be reviewed, with its clinical implications for earlier and more accurate prediction of shock. PMID:26950588

  1. A novel method to predict visual field progression more accurately, using intraocular pressure measurements in glaucoma patients

    PubMed Central

    Asaoka, Ryo; Fujino, Yuri; Murata, Hiroshi; Miki, Atsuya; Tanito, Masaki; Mizoue, Shiro; Mori, Kazuhiko; Suzuki, Katsuyoshi; Yamashita, Takehiro; Kashiwagi, Kenji; Shoji, Nobuyuki

    2016-01-01

    Visual field (VF) data were retrospectively obtained from 491 eyes in 317 patients with open angle glaucoma who had undergone ten VF tests (Humphrey Field Analyzer, 24-2, SITA standard). First, the mean of total deviation values (mTD) in the tenth VF was predicted using standard linear regression of the first five VFs (VF1-5) through to all nine preceding VFs (VF1-9). Then an ‘intraocular pressure (IOP)-integrated VF trend analysis’ was carried out by simply using time multiplied by IOP as the independent term in the linear regression model. Prediction errors (absolute prediction error or root mean squared error: RMSE) for predicting the mTD and also pointwise TD values of the tenth VF were obtained from both approaches. The mTD absolute prediction errors associated with the IOP-integrated VF trend analysis were significantly smaller than those from the standard trend analysis when VF1-6 through to VF1-8 were used (p < 0.05). The pointwise RMSEs from the IOP-integrated trend analysis were significantly smaller than those from the standard trend analysis when VF1-5 through to VF1-9 were used (p < 0.05). This was especially the case when IOP was measured more frequently. Thus a significantly more accurate prediction of VF progression is possible using a simple trend analysis that incorporates IOP measurements. PMID:27562553
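
The two trend analyses compared in the abstract differ only in the regressor: time alone versus time multiplied by IOP. A minimal sketch on hypothetical exam data (times, IOP readings, and mTD values are invented for illustration):

```python
import numpy as np

# Hypothetical series: exam times (years), IOP at each exam (mmHg), and
# mean total deviation (mTD, dB) for the first eight visual fields.
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5])
iop = np.array([18.0, 17.0, 19.0, 18.0, 16.0, 17.0, 18.0, 17.0])
mtd = np.array([-2.0, -2.3, -2.5, -2.9, -3.0, -3.4, -3.6, -3.9])

# Standard trend analysis: mTD regressed on time alone.
slope_t, intercept_t = np.polyfit(t, mtd, 1)

# IOP-integrated trend analysis: the independent term is time * IOP.
x = t * iop
slope_x, intercept_x = np.polyfit(x, mtd, 1)

# Predict mTD at a future exam (t = 4.5 y, assumed IOP of 17 mmHg).
pred_standard = slope_t * 4.5 + intercept_t
pred_iop = slope_x * (4.5 * 17.0) + intercept_x
```

On real series the two predictions diverge when IOP varies over follow-up, which is where the abstract reports the IOP-integrated variant winning.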

  2. A novel method to predict visual field progression more accurately, using intraocular pressure measurements in glaucoma patients.

    PubMed

    2016-01-01

    Visual field (VF) data were retrospectively obtained from 491 eyes in 317 patients with open angle glaucoma who had undergone ten VF tests (Humphrey Field Analyzer, 24-2, SITA standard). First, the mean of total deviation values (mTD) in the tenth VF was predicted using standard linear regression of the first five VFs (VF1-5) through to all nine preceding VFs (VF1-9). Then an 'intraocular pressure (IOP)-integrated VF trend analysis' was carried out by simply using time multiplied by IOP as the independent term in the linear regression model. Prediction errors (absolute prediction error or root mean squared error: RMSE) for predicting the mTD and also pointwise TD values of the tenth VF were obtained from both approaches. The mTD absolute prediction errors associated with the IOP-integrated VF trend analysis were significantly smaller than those from the standard trend analysis when VF1-6 through to VF1-8 were used (p < 0.05). The pointwise RMSEs from the IOP-integrated trend analysis were significantly smaller than those from the standard trend analysis when VF1-5 through to VF1-9 were used (p < 0.05). This was especially the case when IOP was measured more frequently. Thus a significantly more accurate prediction of VF progression is possible using a simple trend analysis that incorporates IOP measurements. PMID:27562553

  3. Combining multiple regression and principal component analysis for accurate predictions for column ozone in Peninsular Malaysia

    NASA Astrophysics Data System (ADS)

    Rajab, Jasim M.; MatJafri, M. Z.; Lim, H. S.

    2013-06-01

    This study encompasses columnar ozone modelling in Peninsular Malaysia. A data set of eight atmospheric parameters [air surface temperature (AST), carbon monoxide (CO), methane (CH4), water vapour (H2Ovapour), skin surface temperature (SSKT), atmosphere temperature (AT), relative humidity (RH), and mean surface pressure (MSP)], retrieved from NASA's Atmospheric Infrared Sounder (AIRS) for the period 2003-2008, was employed to develop models to predict columnar ozone (O3) in the study area. A combined method, multiple regression with principal component analysis (PCA), was used to improve the prediction accuracy of columnar ozone. Separate analyses were carried out for the north east monsoon (NEM) and south west monsoon (SWM) seasons. O3 was negatively correlated with CH4, H2Ovapour, RH, and MSP, and positively correlated with CO, AST, SSKT, and AT during both the NEM and SWM seasons. Multiple regression analysis was used to fit the columnar ozone data using the atmospheric parameters as predictors. A variable selection method based on high loadings of varimax-rotated principal components was used to choose subsets of the predictor variables for inclusion in the linear regression model. It was found that an increase in columnar O3 is associated with increases in AST, SSKT, AT, and CO and with drops in CH4, H2Ovapour, RH, and MSP. Fitting the best models for columnar O3 using the eight independent variables gave about the same values of R (≈0.93) and R2 (≈0.86) for both the NEM and SWM seasons. The common variables appearing in both regression equations were SSKT, CH4, and RH, and the principal predictor of columnar O3 in both seasons was SSKT.
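
The selection-then-regression pipeline can be sketched in a few lines. This uses plain (unrotated) PCA loadings rather than the varimax rotation of the paper, and fully synthetic stand-ins for the eight AIRS predictors, so it illustrates the workflow only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the eight AIRS predictors (rows = observations).
names = ["AST", "CO", "CH4", "H2Ovapour", "SSKT", "AT", "RH", "MSP"]
X = rng.normal(size=(200, 8))
o3 = 2.0 * X[:, 4] - 1.5 * X[:, 2] - 1.0 * X[:, 6] \
     + rng.normal(scale=0.1, size=200)

# Standardize, then obtain principal-component loadings via SVD
# (no varimax rotation here).
Xs = (X - X.mean(0)) / X.std(0)
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)

# Keep the variable with the highest absolute loading on each of the
# three leading components, then fit an ordinary linear regression.
selected = sorted({int(np.abs(Vt[i]).argmax()) for i in range(3)})
A = np.column_stack([Xs[:, selected], np.ones(len(Xs))])
coef, *_ = np.linalg.lstsq(A, o3, rcond=None)
r2 = 1.0 - ((o3 - A @ coef) ** 2).sum() / ((o3 - o3.mean()) ** 2).sum()
```

The point of the selection step is to avoid feeding all eight collinear predictors into the regression; only one high-loading representative per component survives.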

  4. Prognostic breast cancer signature identified from 3D culture model accurately predicts clinical outcome across independent datasets

    SciTech Connect

    Martin, Katherine J.; Patrick, Denis R.; Bissell, Mina J.; Fournier, Marcia V.

    2008-10-20

    One of the major tenets in breast cancer research is that early detection is vital for patient survival by increasing treatment options. To that end, we have previously used a novel unsupervised approach to identify a set of genes whose expression predicts prognosis of breast cancer patients. The predictive genes were selected in a well-defined three-dimensional (3D) cell culture model of non-malignant human mammary epithelial cell morphogenesis as down-regulated during breast epithelial cell acinar formation and cell cycle arrest. Here we examine the ability of this gene signature (3D-signature) to predict prognosis in three independent breast cancer microarray datasets having 295, 286, and 118 samples, respectively. Our results show that the 3D-signature accurately predicts prognosis in three unrelated patient datasets. At 10 years, the probability of positive outcome was 52, 51, and 47 percent in the group with a poor-prognosis signature and 91, 75, and 71 percent in the group with a good-prognosis signature for the three datasets, respectively (Kaplan-Meier survival analysis, p<0.05). Hazard ratios for poor outcome were 5.5 (95% CI 3.0 to 12.2, p<0.0001), 2.4 (95% CI 1.6 to 3.6, p<0.0001) and 1.9 (95% CI 1.1 to 3.2, p = 0.016) and remained significant for the two larger datasets when corrected for estrogen receptor (ER) status. Hence the 3D-signature accurately predicts breast cancer outcome in both ER-positive and ER-negative tumors, though individual genes differed in their prognostic ability in the two subtypes. Genes that were prognostic in ER-positive patients are AURKA, CEP55, RRM2, EPHA2, FGFBP1, and VRK1, while genes prognostic in ER-negative patients include ACTB, FOXM1 and SERPINE2 (Kaplan-Meier p<0.05). Multivariable Cox regression analysis in the largest dataset showed that the 3D-signature was a strong independent factor in predicting breast cancer outcome. The 3D-signature accurately predicts breast cancer outcome across multiple datasets and holds prognostic
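
The Kaplan-Meier survival analysis cited in the abstract is the standard product-limit estimator; a minimal implementation on hypothetical follow-up data (the times and event flags are invented, not from the study):

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier product-limit survival estimate.
    times: follow-up times; events: 1 = outcome observed, 0 = censored.
    Returns a list of (event_time, survival_probability) steps."""
    times = np.asarray(times, float)
    events = np.asarray(events, int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    curve, s = [], 1.0
    for t in np.unique(times[events == 1]):
        at_risk = (times >= t).sum()
        d = ((times == t) & (events == 1)).sum()
        s *= 1.0 - d / at_risk          # multiply in this step's survival
        curve.append((float(t), s))
    return curve

# Hypothetical follow-up (years) for eight poor-prognosis patients.
curve = kaplan_meier([2, 3, 3, 5, 8, 10, 10, 12],
                     [1, 1, 0, 1, 1, 0, 1, 0])
```

Censored patients (event flag 0) still count in the at-risk denominator up to their censoring time, which is what distinguishes this from a naive survival fraction.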

  5. Simplified versus geometrically accurate models of forefoot anatomy to predict plantar pressures: A finite element study.

    PubMed

    Telfer, Scott; Erdemir, Ahmet; Woodburn, James; Cavanagh, Peter R

    2016-01-25

    Integration of patient-specific biomechanical measurements into the design of therapeutic footwear has been shown to improve clinical outcomes in patients with diabetic foot disease. The addition of numerical simulations intended to optimise intervention design may help to build on these advances; however, at present the time and labour required to generate and run personalised models of foot anatomy restrict their routine clinical utility. In this study we developed second-generation personalised simple finite element (FE) models of the forefoot with varying geometric fidelities. Plantar pressure predictions from barefoot, shod, and shod-with-insole simulations using simplified models were compared to those obtained from CT-based FE models incorporating more detailed representations of bone and tissue geometry. A simplified model including representations of metatarsals based on simple geometric shapes, embedded within a contoured soft tissue block with outer geometry acquired from a 3D surface scan, was found to provide pressure predictions closest to the more complex model, with mean differences of 13.3 kPa (SD 13.4), 12.52 kPa (SD 11.9) and 9.6 kPa (SD 9.3) for the barefoot, shod, and insole conditions respectively. The simplified model design could be produced in <1 h compared to >3 h in the case of the more detailed model, and solved on average 24% faster. FE models of the forefoot based on simplified geometric representations of the metatarsal bones and soft tissue surface geometry from 3D surface scans may potentially provide a simulation approach with improved clinical utility; however, further validity testing across a range of therapeutic footwear types is required.
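
The comparison metric used above (mean absolute pressure difference with its SD between the simplified and detailed models) is straightforward to reproduce; the pressure values below are hypothetical, chosen only to show the calculation.

```python
import numpy as np

# Hypothetical peak plantar pressures (kPa) at matched forefoot sites
# from a detailed CT-based FE model and a simplified geometric model.
detailed = np.array([310.0, 275.0, 240.0, 205.0, 180.0])
simplified = np.array([322.0, 260.0, 255.0, 196.0, 192.0])

diff = np.abs(simplified - detailed)
mean_diff = diff.mean()           # mean absolute difference, kPa
sd_diff = diff.std(ddof=1)        # sample SD of the differences, kPa
```

A small mean difference with a small SD indicates the simplified geometry reproduces the detailed model site by site, not just on average.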

  6. Four-protein signature accurately predicts lymph node metastasis and survival in oral squamous cell carcinoma.

    PubMed

    Zanaruddin, Sharifah Nurain Syed; Saleh, Amyza; Yang, Yi-Hsin; Hamid, Sharifah; Mustafa, Wan Mahadzir Wan; Khairul Bariah, A A N; Zain, Rosnah Binti; Lau, Shin Hin; Cheong, Sok Ching

    2013-03-01

    The presence of lymph node (LN) metastasis significantly affects the survival of patients with oral squamous cell carcinoma (OSCC). Successful detection and removal of positive LNs are crucial in the treatment of this disease. Current evaluation methods still have their limitations in detecting the presence of tumor cells in the LNs, where up to a third of clinically diagnosed metastasis-negative (N0) patients actually have metastasis-positive LNs in the neck. We developed a molecular signature in the primary tumor that could predict LN metastasis in OSCC. A total of 211 cores from 55 individuals were included in the study. Eleven proteins were evaluated using immunohistochemical analysis in a tissue microarray. Of the 11 biomarkers evaluated using receiver operating characteristic (ROC) curve analysis, epidermal growth factor receptor (EGFR), v-erb-b2 erythroblastic leukemia viral oncogene homolog 2 (HER-2/neu), laminin, gamma 2 (LAMC2), and ras homolog family member C (RHOC) were found to be significantly associated with the presence of LN metastasis. Unsupervised hierarchical clustering demonstrated that expression patterns of these 4 proteins could be used to differentiate specimens that have positive LN metastasis from those that are negative for LN metastasis. Collectively, EGFR, HER-2/neu, LAMC2, and RHOC have a specificity of 87.5% and a sensitivity of 70%, with a prognostic accuracy of 83.4% for LN metastasis. We also demonstrated that the LN signature could independently predict disease-specific survival (P = .036). The 4-protein LN signature, validated in an independent set of samples, strongly suggests that it could reliably distinguish patients with LN metastasis from those who were metastasis-free and therefore could be a prognostic tool for the management of patients with OSCC.
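
The reported sensitivity and specificity come straight from classification counts. A sketch with hypothetical counts chosen to reproduce the abstract's 70% sensitivity and 87.5% specificity (the actual confusion matrix is not given in the abstract):

```python
# Hypothetical validation counts: the 4-protein signature calls each
# specimen LN-positive or LN-negative.
tp, fn = 14, 6    # metastasis-positive specimens: detected / missed
tn, fp = 21, 3    # metastasis-negative specimens: correct / false alarms

sensitivity = tp / (tp + fn)                    # fraction of positives caught
specificity = tn / (tn + fp)                    # fraction of negatives cleared
accuracy = (tp + tn) / (tp + tn + fp + fn)      # overall fraction correct
```

Note that overall accuracy depends on the positive/negative mix of the sample, which is why sensitivity and specificity are reported separately.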

  7. Four-protein signature accurately predicts lymph node metastasis and survival in oral squamous cell carcinoma.

    PubMed

    Zanaruddin, Sharifah Nurain Syed; Saleh, Amyza; Yang, Yi-Hsin; Hamid, Sharifah; Mustafa, Wan Mahadzir Wan; Khairul Bariah, A A N; Zain, Rosnah Binti; Lau, Shin Hin; Cheong, Sok Ching

    2013-03-01

    The presence of lymph node (LN) metastasis significantly affects the survival of patients with oral squamous cell carcinoma (OSCC). Successful detection and removal of positive LNs are crucial in the treatment of this disease. Current evaluation methods still have their limitations in detecting the presence of tumor cells in the LNs, where up to a third of clinically diagnosed metastasis-negative (N0) patients actually have metastasis-positive LNs in the neck. We developed a molecular signature in the primary tumor that could predict LN metastasis in OSCC. A total of 211 cores from 55 individuals were included in the study. Eleven proteins were evaluated using immunohistochemical analysis in a tissue microarray. Of the 11 biomarkers evaluated using receiver operating characteristic (ROC) curve analysis, epidermal growth factor receptor (EGFR), v-erb-b2 erythroblastic leukemia viral oncogene homolog 2 (HER-2/neu), laminin, gamma 2 (LAMC2), and ras homolog family member C (RHOC) were found to be significantly associated with the presence of LN metastasis. Unsupervised hierarchical clustering demonstrated that expression patterns of these 4 proteins could be used to differentiate specimens that have positive LN metastasis from those that are negative for LN metastasis. Collectively, EGFR, HER-2/neu, LAMC2, and RHOC have a specificity of 87.5% and a sensitivity of 70%, with a prognostic accuracy of 83.4% for LN metastasis. We also demonstrated that the LN signature could independently predict disease-specific survival (P = .036). The 4-protein LN signature, validated in an independent set of samples, strongly suggests that it could reliably distinguish patients with LN metastasis from those who were metastasis-free and therefore could be a prognostic tool for the management of patients with OSCC. PMID:23026198

  8. Nonempirically Tuned Range-Separated DFT Accurately Predicts Both Fundamental and Excitation Gaps in DNA and RNA Nucleobases

    PubMed Central

    2012-01-01

    Using a nonempirically tuned range-separated DFT approach, we study both the quasiparticle properties (HOMO–LUMO fundamental gaps) and excitation energies of DNA and RNA nucleobases (adenine, thymine, cytosine, guanine, and uracil). Our calculations demonstrate that a physically motivated, first-principles tuned DFT approach accurately reproduces results from both experimental benchmarks and more computationally intensive techniques such as many-body GW theory. Furthermore, in the same set of nucleobases, we show that the nonempirical range-separated procedure also leads to significantly improved results for excitation energies compared to conventional DFT methods. The present results emphasize the importance of a nonempirically tuned range-separation approach for accurately predicting both fundamental and excitation gaps in DNA and RNA nucleobases. PMID:22904693

  9. Lateral impact validation of a geometrically accurate full body finite element model for blunt injury prediction.

    PubMed

    Vavalle, Nicholas A; Moreno, Daniel P; Rhyne, Ashley C; Stitzel, Joel D; Gayzik, F Scott

    2013-03-01

    This study presents four validation cases of a mid-sized male (M50) full human body finite element model: two lateral sled tests at 6.7 m/s, one sled test at 8.9 m/s, and a lateral drop test. Model results were compared to transient force curves, peak force, chest compression, and number of fractures from the studies. For one of the 6.7 m/s impacts (flat wall impact), the peak thoracic, abdominal and pelvic loads were 8.7, 3.1 and 14.9 kN for the model and 5.2 ± 1.1 kN, 3.1 ± 1.1 kN, and 6.3 ± 2.3 kN for the tests. For the same test setup in the 8.9 m/s case, they were 12.6, 6, and 21.9 kN for the model and 9.1 ± 1.5 kN, 4.9 ± 1.1 kN, and 17.4 ± 6.8 kN for the experiments. The combined torso load and the pelvis load simulated in a second rigid wall impact at 6.7 m/s were 11.4 and 15.6 kN, respectively, compared to 8.5 ± 0.2 kN and 8.3 ± 1.8 kN experimentally. The peak thorax load in the drop test was 6.7 kN for the model, within the range in the cadavers, 5.8-7.4 kN. When analyzing rib fractures, the model predicted Abbreviated Injury Scale scores within the reported range in three of four cases. Objective comparison methods were used to quantitatively compare the model results to the literature studies. The results show a good match in the thorax and abdomen regions, while the pelvis results overpredicted the reaction loads from the literature studies. These results are an important milestone in the development and validation of this globally developed average male FEA model in lateral impact.

  10. Accurate prediction of the refractive index of polymers using first principles and data modeling

    NASA Astrophysics Data System (ADS)

    Afzal, Mohammad Atif Faiz; Cheng, Chong; Hachmann, Johannes

    Organic polymers with a high refractive index (RI) have recently attracted considerable interest due to their potential application in optical and optoelectronic devices. The ability to tailor the molecular structure of polymers is the key to increasing the accessible RI values. Our work concerns the creation of predictive in silico models for the optical properties of organic polymers, the screening of large-scale candidate libraries, and the mining of the resulting data to extract the underlying design principles that govern their performance. This work was set up to guide our experimentalist partners and allow them to target the most promising candidates. Our model is based on the Lorentz-Lorenz equation and thus includes the polarizability and number density values for each candidate. For the former, we performed a detailed benchmark study of different density functionals, basis sets, and the extrapolation scheme towards the polymer limit. For the number density we devised an exceedingly efficient machine learning approach to correlate the polymer structure and the packing fraction in the bulk material. We validated the proposed RI model against the experimentally known RI values of 112 polymers. We could show that the proposed combination of physical and data modeling is both successful and highly economical to characterize a wide range of organic polymers, which is a prerequisite for virtual high-throughput screening.
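
The Lorentz-Lorenz relation at the heart of the RI model, (n^2-1)/(n^2+2) = (4*pi/3)*N*alpha, can be inverted for n directly. The polarizability and number-density values below are hypothetical placeholders, not benchmarked results from the study.

```python
import numpy as np

def refractive_index(alpha_cm3, n_density_cm3):
    """Refractive index from the Lorentz-Lorenz relation
    (n^2 - 1)/(n^2 + 2) = (4*pi/3) * N * alpha  (CGS units),
    solved for n: n = sqrt((1 + 2q)/(1 - q)) with q = (4*pi/3)*N*alpha."""
    q = 4.0 * np.pi / 3.0 * n_density_cm3 * alpha_cm3
    return np.sqrt((1.0 + 2.0 * q) / (1.0 - q))

# Hypothetical repeat-unit polarizability (cm^3) and number density (cm^-3)
# for a candidate high-RI polymer.
n = refractive_index(1.2e-23, 8.0e21)
```

The formula makes the design trade-off explicit: RI rises with both polarizability (a quantum-chemistry quantity) and packing density (the machine-learned quantity in this work).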

  11. Accurate predictions of C-SO2R bond dissociation enthalpies using density functional theory methods.

    PubMed

    Yu, Hai-Zhu; Fu, Fang; Zhang, Liang; Fu, Yao; Dang, Zhi-Min; Shi, Jing

    2014-10-14

    The dissociation of the C-SO2R bond is frequently involved in organic and bio-organic reactions, and C-SO2R bond dissociation enthalpies (BDEs) are potentially important for understanding the related mechanisms. The primary goal of the present study is to provide a reliable calculation method to predict the different C-SO2R BDEs. Comparing the accuracies of 13 different density functional theory (DFT) methods (such as B3LYP, TPSS, and M05) and different basis sets (such as 6-31G(d) and 6-311++G(2df,2p)), we found that M06-2X/6-31G(d) gives the best performance in reproducing the various C-S BDEs (and especially the C-SO2R BDEs). As an example of understanding mechanisms with the aid of C-SO2R BDEs, some preliminary mechanistic studies were carried out on the chemoselective coupling (in the presence of a Cu-catalyst) or desulfinative coupling reactions (in the presence of a Pd-catalyst) between sulfinic acid salts and boryl/sulfinic acid salts.
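
A BDE is obtained from computed enthalpies of the parent molecule and the two radical fragments. The conversion factor is the standard hartree-to-kcal/mol value; the enthalpies below are hypothetical placeholders standing in for M06-2X/6-31G(d) outputs.

```python
HARTREE_TO_KCAL = 627.509  # standard conversion, kcal/mol per hartree

def bde(h_parent, h_frag1, h_frag2):
    """Homolytic bond dissociation enthalpy in kcal/mol from gas-phase
    enthalpies in hartree: R1-R2 -> R1. + R2."""
    return (h_frag1 + h_frag2 - h_parent) * HARTREE_TO_KCAL

# Hypothetical enthalpies (hartree) for an R-SO2R' homolysis.
value = bde(-819.412, -779.330, -39.978)
```

Because the BDE is a small difference of large numbers, both fragments and the parent must be computed with the same functional and basis set, which is the consistency the benchmarking in this paper establishes.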

  12. Towards Accurate Prediction of Turbulent, Three-Dimensional, Recirculating Flows with the NCC

    NASA Technical Reports Server (NTRS)

    Iannetti, A.; Tacina, R.; Jeng, S.-M.; Cai, J.

    2001-01-01

    The National Combustion Code (NCC) was used to calculate the steady state, nonreacting flow field of a prototype Lean Direct Injection (LDI) swirler. This configuration used nine groups of eight holes drilled at a thirty-five degree angle to induce swirl. These nine groups created swirl in the same direction, or a corotating pattern. The static pressure drop across the holes was fixed at approximately four percent. Computations were performed on one quarter of the geometry, because the geometry is considered rotationally periodic every ninety degrees. The final computational grid used was approximately 2.26 million tetrahedral cells, and a cubic nonlinear k-epsilon model was used to model turbulence. The NCC results were then compared to time-averaged Laser Doppler Velocimetry (LDV) data. The LDV measurements were performed on the full geometry, although only four ninths of it was measured. One-, two-, and three-dimensional representations of both flow fields are presented. The NCC computations compare both qualitatively and quantitatively well to the LDV data, but differences exist downstream. The comparison is encouraging, and shows that NCC can be used for future injector design studies. Recommendations are given to improve the flow prediction accuracy of turbulent, three-dimensional, recirculating flow fields with the NCC.

  13. An improved method for accurate prediction of mass flows through combustor liner holes

    SciTech Connect

    Adkins, R.C.; Gueroui, D.

    1986-01-01

    The objective of this paper is to present a simple approach to the solution of flow through combustor liner holes which can be used by practicing combustor engineers as well as providing the specialist modeler with a convenient boundary condition. For modeling purposes, if all relevant details of the incoming jets can be readily predicted, then the computational boundary can be limited to the inner wall of the liner and to the jets themselves. The scope of this paper is limited to the derivation of a simple analysis, the development of a reliable test technique, and the correlation of data for plain holes having a diameter which is large compared to the liner wall thickness. The effect of internal liner flow on the performance of the holes is neglected; this is considered justifiable because the analysis terminates a short distance downstream of the hole, where the significantly lower velocities inside the combustor have had little opportunity to take effect. It is intended to extend the procedure to more complex hole forms and flow configurations in later papers.
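
The textbook starting point that such correlations refine is the incompressible orifice relation, m_dot = Cd * A * sqrt(2 * rho * dP). The discharge coefficient and flow conditions below are illustrative assumptions, not values from the paper.

```python
import math

def hole_mass_flow(cd, d_hole, rho, dp):
    """Mass flow (kg/s) through a plain liner hole from the incompressible
    orifice relation m_dot = Cd * A * sqrt(2 * rho * dP)."""
    area = math.pi * d_hole**2 / 4.0
    return cd * area * math.sqrt(2.0 * rho * dp)

# Illustrative conditions: 10 mm hole, air at ~1.2 kg/m^3, 4 kPa drop
# (roughly a 4% drop of 1 bar).
m_dot = hole_mass_flow(cd=0.62, d_hole=0.010, rho=1.2, dp=4000.0)
```

The whole difficulty addressed by the paper sits inside Cd, which for liner holes depends on the crossflow and the jet angle rather than being a sharp-edged-orifice constant.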

  14. Neural network approach to quantum-chemistry data: Accurate prediction of density functional theory energies

    NASA Astrophysics Data System (ADS)

    Balabin, Roman M.; Lomakina, Ekaterina I.

    2009-08-01

    An artificial neural network (ANN) approach has been applied to estimate density functional theory (DFT) energies with a large basis set using lower-level energy values and molecular descriptors. A total of 208 different molecules were used for ANN training, cross validation, and testing, applying the BLYP, B3LYP, and BMK density functionals. Hartree-Fock results were reported for comparison. Furthermore, constitutional molecular descriptors (CDs) and quantum-chemical molecular descriptors (QDs) were used for building the calibration model. Neural network structure optimization, leading to four to five hidden neurons, was also carried out. Using several low-level energy values was found to greatly reduce the prediction error. The expected error (mean absolute deviation) for the ANN approximation to DFT energies was 0.6±0.2 kcal mol-1. In addition, a comparison of the different density functionals and basis sets and a comparison with multiple linear regression results were also provided. The CDs were found to overcome limitations of the QDs. Furthermore, an effective ANN model for DFT/6-311G(3df,3pd) and DFT/6-311G(2df,2pd) energy estimation was developed, and benchmark results were provided.
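
The scheme (a small feedforward net mapping low-level energies and descriptors to a high-level energy) can be sketched with a single hidden layer of tanh neurons trained by gradient descent. The data here are fully synthetic; the paper's 208-molecule data set and descriptor definitions are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in: target "high-level" energy as a smooth function of
# one low-level energy value and one molecular descriptor, both scaled.
X = rng.uniform(-1.0, 1.0, size=(256, 2))
y = 0.9 * X[:, 0] + 0.1 * np.tanh(2.0 * X[:, 1])

# One hidden layer of 5 tanh neurons (the paper settles on 4-5 neurons),
# trained with plain gradient descent on mean-squared error.
W1 = rng.normal(scale=0.5, size=(2, 5)); b1 = np.zeros(5)
W2 = rng.normal(scale=0.5, size=(5, 1)); b2 = np.zeros(1)
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)              # hidden activations
    pred = (H @ W2 + b2).ravel()
    err = pred - y
    gW2 = H.T @ err[:, None] / len(X)     # backprop through output layer
    gb2 = err.mean(keepdims=True)
    dH = err[:, None] * W2.T * (1.0 - H**2)
    gW1 = X.T @ dH / len(X)               # backprop through hidden layer
    gb1 = dH.mean(0)
    for p, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        p -= 0.5 * g                      # gradient-descent step
mae = np.abs(pred - y).mean()             # mean absolute deviation
```

The mean absolute deviation plays the same role as the 0.6 kcal/mol figure in the abstract: it measures how well the cheap inputs recover the expensive target.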

  15. Line Shape Parameters for CO_2 Transitions: Accurate Predictions from Complex Robert-Bonamy Calculations

    NASA Astrophysics Data System (ADS)

    Lamouroux, Julien; Gamache, Robert R.

    2013-06-01

    A model for the prediction of the vibrational dependence of CO_2 half-widths and line shifts for several broadeners, based on a modification of the model proposed by Gamache and Hartmann, is presented. This model allows the half-widths and line shifts for a ro-vibrational transition to be expressed in terms of the number of vibrational quanta exchanged in the transition raised to a power p and a reference ro-vibrational transition. Complex Robert-Bonamy calculations were made for 24 bands for lower rotational quantum numbers J'' from 0 to 160 for N_2-, O_2-, air-, and self-collisions with CO_2. In the model, a Quantum Coordinate is defined by (c_1 Δν_1 + c_2 Δν_2 + c_3 Δν_3)^p, and a linear least-squares fit of the data to the model expression is made. The model allows the determination of the slope and intercept as a function of rotational transition, broadening gas, and temperature. From these fit data, the half-width, line shift, and the temperature dependence of the half-width can be estimated for any ro-vibrational transition, allowing spectroscopic CO_2 databases to have complete information for the line shape parameters. R. R. Gamache, J.-M. Hartmann, J. Quant. Spectrosc. Radiat. Transfer. {{83}} (2004), 119. R. R. Gamache, J. Lamouroux, J. Quant. Spectrosc. Radiat. Transfer. {{117}} (2013), 93.
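
The fit itself is an ordinary linear least-squares regression of the half-width against the Quantum Coordinate of each band. The half-width values and coordinate values below are hypothetical, chosen only to show the slope/intercept extraction and the extrapolation step.

```python
import numpy as np

# Hypothetical half-widths (cm^-1 atm^-1) for one rotational transition of
# CO2 across several vibrational bands, tabulated against each band's
# Quantum Coordinate (c1*dnu1 + c2*dnu2 + c3*dnu3)**p.
qc = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
gamma = np.array([0.0740, 0.0736, 0.0733, 0.0729, 0.0725])

# Linear least-squares fit: gamma = slope * QC + intercept.
slope, intercept = np.polyfit(qc, gamma, 1)

# Estimate the half-width for an unmeasured band at QC = 5.
gamma_pred = slope * 5.0 + intercept
```

Once slope and intercept are tabulated per rotational transition, broadener, and temperature, any band's half-width follows from its Quantum Coordinate alone, which is what lets a database be filled in completely.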

  16. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery

    PubMed Central

    Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke

    2015-01-01

    attributed to phantom setup errors due to the slightly deformable and flexible phantom extremities. The estimated site-specific safety buffer distance with 0.001% probability of collision for (gantry-to-couch, gantry-to-phantom) was (1.23 cm, 3.35 cm), (1.01 cm, 3.99 cm), and (2.19 cm, 5.73 cm) for treatment to the head, lung, and prostate, respectively. Automated delivery to all three treatment sites was completed in 15 min and collision free using a digital Linac. Conclusions: An individualized collision prediction model for the purpose of noncoplanar beam delivery was developed and verified. With the model, the study has demonstrated the feasibility of predicting deliverable beams for an individual patient and then guiding fully automated noncoplanar treatment delivery. This work motivates development of clinical workflows and quality assurance procedures to allow more extensive use and automation of noncoplanar beam geometries. PMID:26520735

  17. The development and verification of a highly accurate collision prediction model for automated noncoplanar plan delivery

    SciTech Connect

    Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke

    2015-11-15

    attributed to phantom setup errors due to the slightly deformable and flexible phantom extremities. The estimated site-specific safety buffer distance with 0.001% probability of collision for (gantry-to-couch, gantry-to-phantom) was (1.23 cm, 3.35 cm), (1.01 cm, 3.99 cm), and (2.19 cm, 5.73 cm) for treatment to the head, lung, and prostate, respectively. Automated delivery to all three treatment sites was completed in 15 min and collision free using a digital Linac. Conclusions: An individualized collision prediction model for the purpose of noncoplanar beam delivery was developed and verified. With the model, the study has demonstrated the feasibility of predicting deliverable beams for an individual patient and then guiding fully automated noncoplanar treatment delivery. This work motivates development of clinical workflows and quality assurance procedures to allow more extensive use and automation of noncoplanar beam geometries.

  18. How Accurate Are the Anthropometry Equations in Iranian Military Men in Predicting Body Composition?

    PubMed Central

    Shakibaee, Abolfazl; Faghihzadeh, Soghrat; Alishiri, Gholam Hossein; Ebrahimpour, Zeynab; Faradjzadeh, Shahram; Sobhani, Vahid; Asgari, Alireza

    2015-01-01

    Background: Body composition varies according to life style (i.e., calorie intake and caloric expenditure). Therefore, it is wise to record military personnel’s body composition periodically and encourage those who abide by the regulations. Different methods, invasive and non-invasive, have been introduced for body composition assessment; amongst them, the Jackson and Pollock equations are the most popular. Objectives: The recommended anthropometric prediction equations for assessing men’s body composition were compared with the dual-energy X-ray absorptiometry (DEXA) gold standard to develop a modified equation for quantitatively assessing body composition and obesity among Iranian military men. Patients and Methods: A total of 101 military men aged 23 - 52 years old with a mean age of 35.5 years were recruited and evaluated in the present study (average height, 173.9 cm; average weight, 81.5 kg). The body fat percentages of subjects were assessed both by anthropometric measurement and by DEXA scan. The data obtained from these two methods were then compared using multiple regression analysis. Results: The mean ± standard deviation body fat percentage from the DEXA assessment was 21.2 ± 4.3, and the body fat percentages obtained from the Jackson and Pollock 3-, 4- and 7-site equations were 21.1 ± 5.8, 22.2 ± 6.0 and 20.9 ± 5.7, respectively. There was a strong correlation between these three equations and DEXA (R² = 0.98). Conclusions: The mean percentage of body fat obtained from the three Jackson and Pollock equations was very close to that obtained from DEXA; however, we suggest using a modified Jackson-Pollock 3-site equation for volunteer military men because the 3-site method is simpler and faster than the other methods. PMID:26715964
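
The standard (unmodified) Jackson-Pollock 3-site equation for men feeds a body-density estimate into the Siri equation; both formulas below are the widely published versions, while the skinfold inputs are illustrative, and the paper's modified Iranian-specific coefficients are not reproduced here.

```python
def jp3_body_fat(chest_mm, abdomen_mm, thigh_mm, age):
    """Percent body fat for men from the standard Jackson-Pollock 3-site
    skinfold equation (chest, abdomen, thigh skinfolds in mm) followed by
    the Siri equation: %BF = 495 / body_density - 450."""
    s = chest_mm + abdomen_mm + thigh_mm
    density = (1.10938 - 0.0008267 * s + 0.0000016 * s**2
               - 0.0002574 * age)
    return 495.0 / density - 450.0

# Illustrative skinfolds for a 35-year-old subject.
bf = jp3_body_fat(chest_mm=12.0, abdomen_mm=20.0, thigh_mm=15.0, age=35)
```

A population-specific recalibration, as proposed in the paper, amounts to refitting the four density coefficients against DEXA for the target group.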

  19. Industrial Compositional Streamline Simulation for Efficient and Accurate Prediction of Gas Injection and WAG Processes

    SciTech Connect

    Margot Gerritsen

    2008-10-31

    Gas-injection processes are widely and increasingly used for enhanced oil recovery (EOR). In the United States, for example, EOR production by gas injection accounts for approximately 45% of total EOR production and has tripled since 1986. The understanding of the multiphase, multicomponent flow taking place in any displacement process is essential for successful design of gas-injection projects. Due to complex reservoir geometry, reservoir fluid properties and phase behavior, the design of accurate and efficient numerical simulations for the multiphase, multicomponent flow governing these processes is nontrivial. In this work, we developed, implemented and tested a streamline-based solver for gas-injection processes that is computationally very attractive: compared with the traditional Eulerian solvers in use by industry, it computes solutions orders of magnitude faster, with comparable accuracy provided that cross-flow effects do not dominate. We contributed to the development of compositional streamline solvers in three significant ways: improvement of the overall framework, allowing improved streamline coverage and partial streamline tracing, amongst others; parallelization of the streamline code, which significantly improves wall-clock time; and development of new compositional solvers that can be implemented along streamlines as well as in existing Eulerian codes used by industry. We designed several novel ideas in the streamline framework. First, we developed an adaptive streamline coverage algorithm. Adding streamlines locally can reduce computational costs by concentrating computational efforts where needed, and reduce mapping errors. Adapting streamline coverage effectively controls mass balance errors that mostly result from the mapping from streamlines to the pressure grid. We also introduced the concept of partial streamlines: streamlines that do not necessarily start and/or end at wells.
This allows more efficient coverage and avoids

  20. An Accurate Method for Prediction of Protein-Ligand Binding Site on Protein Surface Using SVM and Statistical Depth Function

    PubMed Central

    Wang, Kui; Gao, Jianzhao; Shen, Shiyi; Tuszynski, Jack A.; Ruan, Jishou

    2013-01-01

    Since proteins carry out their functions through interactions with other molecules, accurately identifying the protein-ligand binding site plays an important role in protein functional annotation and rational drug discovery. In the past two decades, many algorithms have been proposed to predict the protein-ligand binding site. In this paper, we introduce a statistical depth function to define negative samples and propose an SVM-based method that integrates sequence and structural information to predict binding sites. The results show that the present method performs better than existing ones. The accuracy, sensitivity, and specificity on the training set are 77.55%, 56.15%, and 87.96%, respectively; on the independent test set, the accuracy, sensitivity, and specificity are 80.36%, 53.53%, and 92.38%, respectively. PMID:24195070
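
    The general shape of such a classifier can be sketched generically; the features, dataset, and kernel below are stand-ins, not the authors' actual descriptors or training data.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # Stand-in per-residue feature vectors (in the paper: sequence profile plus
    # structural descriptors such as statistical depth); label 1 marks a
    # binding-site residue, 0 a depth-selected negative sample.
    X = rng.normal(size=(300, 8))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic separable signal

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", class_weight="balanced"))
    clf.fit(X[:200], y[:200])
    acc = clf.score(X[200:], y[200:])  # held-out accuracy on the synthetic data
    ```

    In practice the negative-sample definition (here just a synthetic rule) is the part the paper's depth function addresses.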

  1. Deformation, Failure, and Fatigue Life of SiC/Ti-15-3 Laminates Accurately Predicted by MAC/GMC

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.

    2002-01-01

    NASA Glenn Research Center's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) (ref.1) has been extended to enable fully coupled macro-micro deformation, failure, and fatigue life predictions for advanced metal matrix, ceramic matrix, and polymer matrix composites. Because of the multiaxial nature of the code's underlying micromechanics model, GMC--which allows the incorporation of complex local inelastic constitutive models--MAC/GMC finds its most important application in metal matrix composites, like the SiC/Ti-15-3 composite examined here. Furthermore, since GMC predicts the microscale fields within each constituent of the composite material, submodels for local effects such as fiber breakage, interfacial debonding, and matrix fatigue damage can and have been built into MAC/GMC. The present application of MAC/GMC highlights the combination of these features, which has enabled the accurate modeling of the deformation, failure, and life of titanium matrix composites.

  2. Critical evidence for the prediction error theory in associative learning.

    PubMed

    Terao, Kanta; Matsumoto, Yukihisa; Mizunami, Makoto

    2015-01-01

    In associative learning in mammals, it is widely accepted that the discrepancy, or error, between actual and predicted reward determines whether learning occurs. Complete evidence for the prediction error theory, however, has not been obtained in any learning systems: Prediction error theory stems from the finding of a blocking phenomenon, but blocking can also be accounted for by other theories, such as the attentional theory. We demonstrated blocking in classical conditioning in crickets and obtained evidence to reject the attentional theory. To obtain further evidence supporting the prediction error theory and rejecting alternative theories, we constructed a neural model to match the prediction error theory, by modifying our previous model of learning in crickets, and we tested a prediction from the model: the model predicts that pharmacological intervention of octopaminergic transmission during appetitive conditioning impairs learning but not formation of reward prediction itself, and it thus predicts no learning in subsequent training. We observed such an "auto-blocking", which could be accounted for by the prediction error theory but not by other competitive theories to account for blocking. This study unambiguously demonstrates validity of the prediction error theory in associative learning. PMID:25754125

  3. Critical evidence for the prediction error theory in associative learning

    PubMed Central

    Terao, Kanta; Matsumoto, Yukihisa; Mizunami, Makoto

    2015-01-01

    In associative learning in mammals, it is widely accepted that the discrepancy, or error, between actual and predicted reward determines whether learning occurs. Complete evidence for the prediction error theory, however, has not been obtained in any learning systems: Prediction error theory stems from the finding of a blocking phenomenon, but blocking can also be accounted for by other theories, such as the attentional theory. We demonstrated blocking in classical conditioning in crickets and obtained evidence to reject the attentional theory. To obtain further evidence supporting the prediction error theory and rejecting alternative theories, we constructed a neural model to match the prediction error theory, by modifying our previous model of learning in crickets, and we tested a prediction from the model: the model predicts that pharmacological intervention of octopaminergic transmission during appetitive conditioning impairs learning but not formation of reward prediction itself, and it thus predicts no learning in subsequent training. We observed such an “auto-blocking”, which could be accounted for by the prediction error theory but not by other competitive theories to account for blocking. This study unambiguously demonstrates validity of the prediction error theory in associative learning. PMID:25754125

  4. Critical evidence for the prediction error theory in associative learning.

    PubMed

    Terao, Kanta; Matsumoto, Yukihisa; Mizunami, Makoto

    2015-03-10

    In associative learning in mammals, it is widely accepted that the discrepancy, or error, between actual and predicted reward determines whether learning occurs. Complete evidence for the prediction error theory, however, has not been obtained in any learning systems: Prediction error theory stems from the finding of a blocking phenomenon, but blocking can also be accounted for by other theories, such as the attentional theory. We demonstrated blocking in classical conditioning in crickets and obtained evidence to reject the attentional theory. To obtain further evidence supporting the prediction error theory and rejecting alternative theories, we constructed a neural model to match the prediction error theory, by modifying our previous model of learning in crickets, and we tested a prediction from the model: the model predicts that pharmacological intervention of octopaminergic transmission during appetitive conditioning impairs learning but not formation of reward prediction itself, and it thus predicts no learning in subsequent training. We observed such an "auto-blocking", which could be accounted for by the prediction error theory but not by other competitive theories to account for blocking. This study unambiguously demonstrates validity of the prediction error theory in associative learning.

  5. A cross-race effect in metamemory: Predictions of face recognition are more accurate for members of our own race.

    PubMed

    Hourihan, Kathleen L; Benjamin, Aaron S; Liu, Xiping

    2012-09-01

    The Cross-Race Effect (CRE) in face recognition is the well-replicated finding that people are better at recognizing faces from their own race, relative to other races. The CRE reveals systematic limitations on eyewitness identification accuracy and suggests that some caution is warranted in evaluating cross-race identification. The CRE is a problem because jurors value eyewitness identification highly in verdict decisions. In the present paper, we explore how accurate people are in predicting their ability to recognize own-race and other-race faces. Caucasian and Asian participants viewed photographs of Caucasian and Asian faces, and made immediate judgments of learning during study. An old/new recognition test replicated the CRE: both groups displayed superior discriminability of own-race faces, relative to other-race faces. Importantly, relative metamnemonic accuracy was also greater for own-race faces, indicating that the accuracy of predictions about face recognition is influenced by race. This result indicates another source of concern when eliciting or evaluating eyewitness identification: people are less accurate in judging whether they will or will not recognize a face when that face is of a different race than they are. This new result suggests that a witness's claim of being likely to recognize a suspect from a lineup should be interpreted with caution when the suspect is of a different race than the witness.

  6. A Weibull statistics-based lignocellulose saccharification model and a built-in parameter accurately predict lignocellulose hydrolysis performance.

    PubMed

    Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu

    2015-09-01

    Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrates, enzymes and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and analysis of the glucose production levels when the λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior, and the λ parameter can be used to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and the λ value in saccharification performance assessment are discussed.
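
    Assuming the standard two-parameter Weibull cumulative form (the paper's exact parameterization may differ slightly), the model and the role of λ can be sketched as:

    ```python
    import math

    def weibull_conversion(t, lam, n, y_max=1.0):
        """Fractional saccharification at time t under a Weibull model.

        lam (λ): characteristic time; n: shape parameter; y_max: saturation level.
        """
        return y_max * (1.0 - math.exp(-((t / lam) ** n)))

    # At t = λ the conversion reaches 1 - 1/e ≈ 63.2% of y_max regardless of n,
    # which is why λ alone can summarize overall saccharification speed.
    print(round(weibull_conversion(24.0, 24.0, 1.5), 3))  # → 0.632
    ```

    Fitting λ and n to a measured hydrolysis time course then gives the single summary number (λ) the paper proposes for comparing systems.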

  7. A Weibull statistics-based lignocellulose saccharification model and a built-in parameter accurately predict lignocellulose hydrolysis performance.

    PubMed

    Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu

    2015-09-01

    Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrates, enzymes and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and analysis of the glucose production levels when the λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior, and the λ parameter can be used to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and the λ value in saccharification performance assessment are discussed. PMID:26121186

  8. Why don't we learn to accurately forecast feelings? How misremembering our predictions blinds us to past forecasting errors.

    PubMed

    Meyvis, Tom; Ratner, Rebecca K; Levav, Jonathan

    2010-11-01

    Why do affective forecasting errors persist in the face of repeated disconfirming evidence? Five studies demonstrate that people misremember their forecasts as consistent with their experience and thus fail to perceive the extent of their forecasting error. As a result, people do not learn from past forecasting errors and fail to adjust subsequent forecasts. In the context of a Super Bowl loss (Study 1), a presidential election (Studies 2 and 3), an important purchase (Study 4), and the consumption of candies (Study 5), individuals mispredicted their affective reactions to these experiences and subsequently misremembered their predictions as more accurate than they actually had been. The findings indicate that this recall error results from people's tendency to anchor on their current affective state when trying to recall their affective forecasts. Further, those who showed larger recall errors were less likely to learn to adjust their subsequent forecasts, and reminding people of their actual forecasts enhanced learning. These results suggest that a failure to accurately recall one's past predictions contributes to the perpetuation of forecasting errors.

  9. Predicting extreme avalanches in self-organized critical sandpiles.

    PubMed

    Garber, Anja; Hallerberg, Sarah; Kantz, Holger

    2009-08-01

    In a finite-size Abelian sandpile model, extreme avalanches repel each other. Taking a time series of the avalanche size and using a decision variable derived from it, we predict the occurrence of a particularly large avalanche in the next time step. The larger the magnitude of these target avalanches, the better is their predictability. The predictability, which is based on a finite-size effect, is discussed as a function of the system size.
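
    A minimal simulation of this class of model (a Bak-Tang-Wiesenfeld sandpile with open boundaries; grid size and drive length below are illustrative) produces the avalanche-size time series that such a prediction scheme operates on:

    ```python
    import numpy as np

    def sandpile_avalanches(size=20, steps=2000, seed=0):
        """Drive an Abelian sandpile and return the avalanche-size time series."""
        rng = np.random.default_rng(seed)
        z = np.zeros((size, size), dtype=int)
        sizes = []
        for _ in range(steps):
            i, j = rng.integers(size, size=2)
            z[i, j] += 1  # drop one grain at a random site
            topplings = 0
            while True:
                unstable = np.argwhere(z >= 4)
                if len(unstable) == 0:
                    break
                # Order of topplings does not matter (Abelian property)
                for i2, j2 in unstable:
                    z[i2, j2] -= 4
                    topplings += 1
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i2 + di, j2 + dj
                        if 0 <= ni < size and 0 <= nj < size:
                            z[ni, nj] += 1  # grains crossing the edge are lost
            sizes.append(topplings)  # avalanche size = number of topplings
        return sizes
    ```

    The finite grid is what bounds the largest possible avalanche and makes extreme events repel each other, which is the effect the prediction exploits.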

  10. Predictive information processing in music cognition. A critical review.

    PubMed

    Rohrmeier, Martin A; Koelsch, Stefan

    2012-02-01

    Expectation and prediction constitute central mechanisms in the perception and cognition of music, which have been explored in theoretical and empirical accounts. We review the scope and limits of theoretical accounts of musical prediction with respect to feature-based and temporal prediction. While the concept of prediction is unproblematic for basic single-stream features such as melody, it is not straightforward for polyphonic structures or higher-order features such as formal predictions. Behavioural results based on explicit and implicit (priming) paradigms provide evidence of priming in various domains that may reflect predictive behaviour. Computational learning models, including symbolic (fragment-based), probabilistic/graphical, or connectionist approaches, provide well-specified predictive models of specific features and feature combinations. While models match some experimental results, full-fledged music prediction cannot yet be modelled. Neuroscientific results regarding the early right-anterior negativity (ERAN) and mismatch negativity (MMN) reflect expectancy violations at different levels of processing complexity, and provide some neural evidence for different predictive mechanisms. At present, combinations of neural and computational modelling methodologies are at early stages and require further research. PMID:22245599

  11. Analysis and Prediction of the Critical Regions of Antimicrobial Peptides Based on Conditional Random Fields

    PubMed Central

    Chang, Kuan Y.; Lin, Tung-pei; Shih, Ling-Yi; Wang, Chien-Kuo

    2015-01-01

    Antimicrobial peptides (AMPs) are potent drug candidates against microbes such as bacteria, fungi, parasites, and viruses. The size of AMPs ranges from less than ten to hundreds of amino acids. Often only a few amino acids, or the critical regions of an antimicrobial protein, determine its functionality. Accurately predicting the AMP critical regions could benefit experimental design. However, no extensive analyses have been done specifically on the AMP critical regions, and computational modeling of them is either non-existent or directed at other problems. With a focus on the AMP critical regions, we thus develop a computational model, AMPcore, by introducing a state-of-the-art machine learning method, conditional random fields. We generate a comprehensive dataset of 798 AMP cores and a low-similarity dataset of 510 representative AMP cores. AMPcore reaches a maximal accuracy of 90% and a 0.79 Matthews correlation coefficient (MCC) on the comprehensive dataset, and a maximal accuracy of 83% and a 0.66 MCC on the low-similarity dataset. Our analyses of AMP cores follow what we know about AMPs: high in glycine and lysine but low in aspartic acid, glutamic acid, and methionine; an abundance of α-helical structures; the dominance of positive net charges; the peculiarity of amphipathicity. Two amphipathic sequence motifs within the AMP cores, an amphipathic α-helix and an amphipathic π-helix, are revealed. In addition, a short sequence motif at the N-terminal boundary of AMP cores is reported for the first time: arginine at the P(-1) position coupling with glycine at the P1 of AMP cores occurs the most, which might link to microbial cell adhesion. PMID:25803302

  12. Accurate prediction of cellular co-translational folding indicates proteins can switch from post- to co-translational folding

    PubMed Central

    Nissley, Daniel A.; Sharma, Ajeet K.; Ahmed, Nabeel; Friedrich, Ulrike A.; Kramer, Günter; Bukau, Bernd; O'Brien, Edward P.

    2016-01-01

    The rates at which domains fold and codons are translated are important factors in determining whether a nascent protein will co-translationally fold and function or misfold and malfunction. Here we develop a chemical kinetic model that calculates a protein domain's co-translational folding curve during synthesis using only the domain's bulk folding and unfolding rates and codon translation rates. We show that this model accurately predicts the course of co-translational folding measured in vivo for four different protein molecules. We then make predictions for a number of different proteins in yeast and find that synonymous codon substitutions, which change translation-elongation rates, can switch some protein domains from folding post-translationally to folding co-translationally—a result consistent with previous experimental studies. Our approach explains essential features of co-translational folding curves and predicts how varying the translation rate at different codon positions along a transcript's coding sequence affects this self-assembly process. PMID:26887592

  13. Accurate prediction of cellular co-translational folding indicates proteins can switch from post- to co-translational folding.

    PubMed

    Nissley, Daniel A; Sharma, Ajeet K; Ahmed, Nabeel; Friedrich, Ulrike A; Kramer, Günter; Bukau, Bernd; O'Brien, Edward P

    2016-01-01

    The rates at which domains fold and codons are translated are important factors in determining whether a nascent protein will co-translationally fold and function or misfold and malfunction. Here we develop a chemical kinetic model that calculates a protein domain's co-translational folding curve during synthesis using only the domain's bulk folding and unfolding rates and codon translation rates. We show that this model accurately predicts the course of co-translational folding measured in vivo for four different protein molecules. We then make predictions for a number of different proteins in yeast and find that synonymous codon substitutions, which change translation-elongation rates, can switch some protein domains from folding post-translationally to folding co-translationally--a result consistent with previous experimental studies. Our approach explains essential features of co-translational folding curves and predicts how varying the translation rate at different codon positions along a transcript's coding sequence affects this self-assembly process. PMID:26887592

  14. A simple yet accurate correction for winner's curse can predict signals discovered in much larger genome scans

    PubMed Central

    Bigdeli, T. Bernard; Lee, Donghyung; Webb, Bradley Todd; Riley, Brien P.; Vladimirov, Vladimir I.; Fanous, Ayman H.; Kendler, Kenneth S.; Bacanu, Silviu-Alin

    2016-01-01

    Motivation: For genetic studies, statistically significant variants explain far less trait variance than ‘sub-threshold’ association signals. To dimension follow-up studies, researchers need to accurately estimate ‘true’ effect sizes at each SNP, e.g. the true mean of odds ratios (ORs)/regression coefficients (RRs) or Z-score noncentralities. Naïve estimates of effect sizes incur winner’s curse biases, which are reduced only by laborious winner’s curse adjustments (WCAs). Given that Z-score estimates can be theoretically translated to other scales, we propose a simple method to compute the WCA for Z-scores, i.e. their true means/noncentralities. Results: WCA of Z-scores shrinks them toward zero while, on the P-value scale, multiple testing adjustment (MTA) shrinks P-values toward one, which corresponds to a zero Z-score. Thus, WCA on the Z-score scale is a proxy for MTA on the P-value scale. Therefore, to estimate Z-score noncentralities for all SNPs in genome scans, we propose the FDR Inverse Quantile Transformation (FIQT). It (i) performs the simpler MTA of P-values using FDR and (ii) obtains noncentralities by back-transforming MTA P-values onto the Z-score scale. Realistic simulations suggest that, compared with competitors, FIQT is more accurate and computationally more efficient by orders of magnitude. Practical application of FIQT to the Psychiatric Genetic Consortium schizophrenia cohort predicts a non-trivial fraction of sub-threshold signals which become significant in much larger supersamples. Conclusions: FIQT is a simple, yet accurate, WCA method for Z-scores (and ORs/RRs, via simple transformations). Availability and Implementation: A 10-line R function implementation is available at https://github.com/bacanusa/FIQT. Contact: sabacanu@vcu.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27187203
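
    The authors provide a 10-line R implementation at the linked repository; the two steps described in the abstract can be sketched in Python as follows (Benjamini-Hochberg as the FDR step; treat this as an illustration, not the reference code):

    ```python
    import numpy as np
    from scipy import stats

    def fiqt(z):
        """FDR Inverse Quantile Transformation: winner's-curse-adjusted Z-scores.

        Step (i): FDR-adjust the two-sided P-values of the observed Z-scores.
        Step (ii): map the adjusted P-values back onto the Z scale, preserving sign.
        """
        z = np.asarray(z, dtype=float)
        p = 2.0 * stats.norm.sf(np.abs(z))  # two-sided P-values
        # Benjamini-Hochberg FDR adjustment
        order = np.argsort(p)
        ranked = p[order] * len(p) / (np.arange(len(p)) + 1)
        ranked = np.minimum.accumulate(ranked[::-1])[::-1]
        p_adj = np.empty_like(p)
        p_adj[order] = np.minimum(ranked, 1.0)
        # Back-transform: noncentrality estimates shrunk toward zero
        return np.sign(z) * stats.norm.isf(p_adj / 2.0)
    ```

    The adjusted Z-scores are always no larger in magnitude than the observed ones, which is exactly the shrinkage that counteracts the winner's curse.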

  15. Small-scale field experiments accurately scale up to predict density dependence in reef fish populations at large scales.

    PubMed

    Steele, Mark A; Forrester, Graham E

    2005-09-20

    Field experiments provide rigorous tests of ecological hypotheses but are usually limited to small spatial scales. It is thus unclear whether these findings extrapolate to larger scales relevant to conservation and management. We show that the results of experiments detecting density-dependent mortality of reef fish on small habitat patches scale up to have similar effects on much larger entire reefs that are the size of small marine reserves and approach the scale at which some reef fisheries operate. We suggest that accurate scaling is due to the type of species interaction causing local density dependence and the fact that localized events can be aggregated to describe larger-scale interactions with minimal distortion. Careful extrapolation from small-scale experiments identifying species interactions and their effects should improve our ability to predict the outcomes of alternative management strategies for coral reef fishes and their habitats.

  16. Effects of the inlet conditions and blood models on accurate prediction of hemodynamics in the stented coronary arteries

    NASA Astrophysics Data System (ADS)

    Jiang, Yongfei; Zhang, Jun; Zhao, Wanhua

    2015-05-01

    Hemodynamics altered by stent implantation is well known to be closely related to in-stent restenosis. Computational fluid dynamics (CFD) methods have been used to investigate the hemodynamics in stented arteries in detail and to help analyze the performance of stents. In this study, blood models with Newtonian or non-Newtonian properties were numerically investigated for the hemodynamics at steady or pulsatile inlet conditions, respectively, employing CFD based on the finite volume method. The results showed that the non-Newtonian blood model decreased the area of low wall shear stress (WSS) compared with the Newtonian model, and that the magnitude of WSS varied with the magnitude and waveform of the inlet velocity. The study indicates that both the inlet conditions and the blood model are important for accurately predicting the hemodynamics. This will be beneficial for estimating the performance of stents and will also help clinicians select the proper stents for their patients.

  17. A novel fibrosis index comprising a non-cholesterol sterol accurately predicts HCV-related liver cirrhosis.

    PubMed

    Ydreborg, Magdalena; Lisovskaja, Vera; Lagging, Martin; Brehm Christensen, Peer; Langeland, Nina; Buhl, Mads Rauning; Pedersen, Court; Mørch, Kristine; Wejstål, Rune; Norkrans, Gunnar; Lindh, Magnus; Färkkilä, Martti; Westin, Johan

    2014-01-01

    Diagnosis of liver cirrhosis is essential in the management of chronic hepatitis C virus (HCV) infection. Liver biopsy is invasive and thus entails a risk of complications as well as a potential risk of sampling error. Therefore, non-invasive diagnostic tools are preferable. The aim of the present study was to create a model for accurate prediction of liver cirrhosis based on patient characteristics and biomarkers of liver fibrosis, including a panel of non-cholesterol sterols reflecting cholesterol synthesis, absorption and secretion. We evaluated variables with potential predictive significance for liver fibrosis in 278 patients originally included in a multicenter phase III treatment trial for chronic HCV infection. A stepwise multivariate logistic model selection was performed with liver cirrhosis, defined as Ishak fibrosis stage 5-6, as the outcome variable. A new index, referred to as the Nordic Liver Index (NoLI) in the paper, was based on the model: Log-odds (predicting cirrhosis) = -12.17 + (age × 0.11) + (BMI (kg/m²) × 0.23) + (D7-lathosterol (μg/100 mg cholesterol) × (-0.013)) + (Platelet count (×10⁹/L) × (-0.018)) + (Prothrombin-INR × 3.69). The area under the ROC curve (AUROC) for prediction of cirrhosis was 0.91 (95% CI 0.86-0.96). The index was validated in a separate cohort of 83 patients, and the AUROC for this cohort was similar (0.90; 95% CI: 0.82-0.98). In conclusion, the new index may complement other methods in diagnosing cirrhosis in patients with chronic HCV infection.
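
    The published model translates directly into code; the coefficients below are copied from the abstract, and the example inputs are hypothetical (this illustrates the formula and is not a clinical tool):

    ```python
    import math

    def noli_log_odds(age_yr, bmi, lathosterol, platelets, inr):
        """Nordic Liver Index (NoLI) log-odds of cirrhosis, per the abstract.

        bmi in kg/m^2; lathosterol: D7-lathosterol in ug/100 mg cholesterol;
        platelets: platelet count in 10^9/L; inr: prothrombin INR.
        """
        return (-12.17 + 0.11 * age_yr + 0.23 * bmi
                - 0.013 * lathosterol - 0.018 * platelets + 3.69 * inr)

    def noli_probability(age_yr, bmi, lathosterol, platelets, inr):
        """Logistic transform of the log-odds into a predicted probability."""
        return 1.0 / (1.0 + math.exp(-noli_log_odds(age_yr, bmi, lathosterol,
                                                    platelets, inr)))

    # Hypothetical patient: 55 y, BMI 27, lathosterol 30, platelets 120, INR 1.3
    print(round(noli_probability(55, 27, 30, 120, 1.3), 2))
    ```

    Higher age, BMI and INR push the index up, while higher lathosterol (a cholesterol-synthesis marker) and platelet count push it down, matching the signs in the published model.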

  18. Prediction of critical heat flux in water-cooled plasma facing components using computational fluid dynamics.

    SciTech Connect

    Bullock, James H.; Youchison, Dennis Lee; Ulrickson, Michael Andrew

    2010-11-01

    Several commercial computational fluid dynamics (CFD) codes now have the capability to analyze Eulerian two-phase flow using the Rohsenow nucleate boiling model. Analysis of boiling due to one-sided heating in plasma facing components (pfcs) is now receiving attention during the design of water-cooled first wall panels for ITER, which may encounter heat fluxes as high as 5 MW/m². Empirical thermal-hydraulic design correlations developed for long fission reactor channels are not reliable when applied to pfcs because fully developed flow conditions seldom exist. Star-CCM+ is one of the commercial CFD codes that can model two-phase flows. Like others, it implements the RPI model for nucleate boiling, but it also seamlessly transitions to a volume-of-fluid model for film boiling. By benchmarking the results of our 3D models against recent experiments on critical heat flux for both smooth rectangular channels and hypervapotrons, we determined the six unique input parameters that accurately characterize the boiling physics for ITER flow conditions under a wide range of absorbed heat flux. We can now exploit this capability to predict the onset of critical heat flux in these components. In addition, the results clearly illustrate the production and transport of vapor and its effect on heat transfer in pfcs, from nucleate boiling through the transition to film boiling. This article describes the boiling physics implemented in CCM+ and compares the computational results to the benchmark experiments carried out independently in the United States and Russia. Temperature distributions agreed to within 10 °C for a wide range of heat fluxes from 3 MW/m² to 10 MW/m² and flow velocities from 1 m/s to 10 m/s in these devices.
Although the analysis is incapable of capturing the stochastic nature of critical heat flux (i.e., time and location may depend on a local materials defect or a turbulence phenomenon), it is highly reliable in determining the heat flux where boiling instabilities begin.

  19. Accurate electrical prediction of memory array through SEM-based edge-contour extraction using SPICE simulation

    NASA Astrophysics Data System (ADS)

    Shauly, Eitan; Rotstein, Israel; Peltinov, Ram; Latinski, Sergei; Adan, Ofer; Levi, Shimon; Menadeva, Ovadya

    2009-03-01

    The continuing transistor scaling efforts, toward smaller devices with similar (or larger) drive current per μm and faster switching, increase the challenge of predicting and controlling the transistor off-state current. Typically, electrical simulators like SPICE use the design intent (as-drawn GDS data). In more sophisticated cases, the simulators are fed with the pattern after lithography and etch process simulations. As the importance of electrical simulation accuracy increases and leakage becomes more dominant, there is a need to feed these simulators with more accurate information extracted from physical on-silicon transistors. Our methodology to predict changes in device performance due to systematic lithography and etch effects was used in this paper. In general, the methodology consists of using OPCCmaxTM for systematic Edge-Contour Extraction (ECE) from transistors along the manufacturing flow, including image distortions like line-end shortening, corner rounding and line-edge roughness. These measurements are used for SPICE modeling. A possible application of this new metrology is to provide, ahead of time, physical and electrical statistical data, improving time to market. In this work, we applied our methodology to analyze small and large arrays of 2.14 μm² 6T-SRAM, manufactured using the Tower Standard Logic for General Purposes Platform. Four out of the six transistors used a "U-Shape AA", known to have higher variability. The predicted electrical performance of the transistors' drive current and leakage current, in terms of nominal values and variability, is presented. We also used the methodology to analyze an entire SRAM block array. A study of isolation leakage and variability is presented.

  20. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance.

    PubMed

    Majaj, Najib J; Hong, Ha; Solomon, Ethan A; DiCarlo, James J

    2015-09-30

    database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. PMID:26424887

  1. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance

    PubMed Central

    Hong, Ha; Solomon, Ethan A.; DiCarlo, James J.

    2015-01-01

    database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. PMID:26424887

  3. Practical theories for service life prediction of critical aerospace structural components

    NASA Technical Reports Server (NTRS)

    Ko, William L.; Monaghan, Richard C.; Jackson, Raymond H.

    1992-01-01

    A new second-order theory was developed for predicting the service lives of aerospace structural components. The predictions based on this new theory were compared with those based on the Ko first-order theory and the classical theory of service life predictions. The new theory gives very accurate service life predictions. An equivalent constant-amplitude stress cycle method was proposed for representing the random load spectrum for crack growth calculations. This method predicts the most conservative service life. The proposed use of minimum detectable crack size, instead of proof load established crack size as an initial crack size for crack growth calculations, could give a more realistic service life.

  4. Computational finite element bone mechanics accurately predicts mechanical competence in the human radius of an elderly population.

    PubMed

    Mueller, Thomas L; Christen, David; Sandercott, Steve; Boyd, Steven K; van Rietbergen, Bert; Eckstein, Felix; Lochmüller, Eva-Maria; Müller, Ralph; van Lenthe, G Harry

    2011-06-01

    High-resolution peripheral quantitative computed tomography (HR-pQCT) is clinically available today and provides a non-invasive measure of 3D bone geometry and micro-architecture with unprecedented detail. In combination with microarchitectural finite element (μFE) models it can be used to determine bone strength using a strain-based failure criterion. Yet, images from only a relatively small part of the radius are acquired and it is not known whether the region recommended for clinical measurements does predict forearm fracture load best. Furthermore, it is questionable whether the currently used failure criterion is optimal because of improvements in image resolution, changes in the clinically measured volume of interest, and because the failure criterion depends on the amount of bone present. Hence, we hypothesized that bone strength estimates would improve by measuring a region closer to the subchondral plate, and by defining a failure criterion that would be independent of the measured volume of interest. To answer our hypotheses, 20% of the distal forearm length from 100 cadaveric but intact human forearms was measured using HR-pQCT. μFE bone strength was analyzed for different subvolumes, as well as for the entire 20% of the distal radius length. Specifically, failure criteria were developed that provided accurate estimates of bone strength as assessed experimentally. It was shown that distal volumes were better in predicting bone strength than more proximal ones. Clinically speaking, this would argue to move the volume of interest for the HR-pQCT measurements even more distally than currently recommended by the manufacturer. Furthermore, new parameter settings using the strain-based failure criterion are presented providing better accuracy for bone strength estimates.

  5. A Support Vector Machine model for the prediction of proteotypic peptides for accurate mass and time proteomics

    SciTech Connect

    Webb-Robertson, Bobbie-Jo M.; Cannon, William R.; Oehmen, Christopher S.; Shah, Anuj R.; Gurumoorthi, Vidhya; Lipton, Mary S.; Waters, Katrina M.

    2008-07-01

    Motivation: The standard approach to identifying peptides based on accurate mass and elution time (AMT) compares these profiles obtained from a high resolution mass spectrometer to a database of peptides previously identified from tandem mass spectrometry (MS/MS) studies. It would be advantageous, with respect to both accuracy and cost, to only search for those peptides that are detectable by MS (proteotypic). Results: We present a Support Vector Machine (SVM) model that uses a simple descriptor space based on 35 properties of amino acid content, charge, hydrophilicity, and polarity for the quantitative prediction of proteotypic peptides. Using three independently derived AMT databases (Shewanella oneidensis, Salmonella typhimurium, Yersinia pestis) for training and validation within and across species, the SVM resulted in an average accuracy measure of ~0.8 with a standard deviation of less than 0.025. Furthermore, we demonstrate that these results are achievable with a small set of 12 variables and can achieve high proteome coverage. Availability: http://omics.pnl.gov/software/STEPP.php
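The classification setup described above (an SVM over 35 physicochemical descriptors) can be sketched as follows. This is an illustrative stand-in assuming scikit-learn is available; the features and labels are synthetic placeholders, not the STEPP descriptors or the AMT training databases:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in: 200 peptides x 35 descriptors (the paper's
# descriptors cover amino acid content, charge, hydrophilicity, polarity).
X = rng.normal(size=(200, 35))
# Label: 1 = proteotypic (observed by MS), 0 = not observed.
# Here the label is tied to the first two features plus noise.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(scores.mean())  # cross-validated accuracy estimate
```

Cross-validation within and across species, as in the paper, would replace the single synthetic split used here.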

  6. Extended Aging Theories for Predictions of Safe Operational Life of Critical Airborne Structural Components

    NASA Technical Reports Server (NTRS)

    Ko, William L.; Chen, Tony

    2006-01-01

    The previously developed Ko closed-form aging theory has been reformulated into a more compact mathematical form for easier application. A new equivalent loading theory and empirical loading theories have also been developed and incorporated into the revised Ko aging theory for the prediction of a safe operational life of airborne failure-critical structural components. The new set of aging and loading theories were applied to predict the safe number of flights for the B-52B aircraft to carry a launch vehicle, the structural life of critical components consumed by load excursion to proof load value, and the ground-sitting life of B-52B pylon failure-critical structural components. A special life prediction method was developed for the preflight predictions of operational life of failure-critical structural components of the B-52H pylon system, for which no flight data are available.

  7. In Eating-Disordered Inpatient Adolescents, Self-Criticism Predicts Nonsuicidal Self-Injury.

    PubMed

    Itzhaky, Liat; Shahar, Golan; Stein, Daniel; Fennig, Silvana

    2016-08-01

We examined the role of the depressive traits of self-criticism and dependency in nonsuicidal self-injury (NSSI) and suicidal ideation among inpatient adolescents with eating disorders. In two studies (N = 103 and 55), inpatients were assessed for depressive traits, suicidal ideation, and NSSI. In Study 2, motivation for carrying out NSSI was also assessed. In both studies, depression predicted suicidal ideation and self-criticism predicted NSSI. In Study 2, depression and suicidal ideation also predicted NSSI. The automatic positive motivation for NSSI was predicted by dependency and depressive symptoms, and by a two-way interaction between self-criticism and dependency. Consistent with the "self-punishment model," self-criticism appears to constitute a dimension of vulnerability for NSSI. PMID:26475665

  8. Predicting intermittent running performance: critical velocity versus endurance index.

    PubMed

    Buchheit, M; Laursen, P B; Millet, G P; Pactat, F; Ahmaidi, S

    2008-04-01

The aim of the present study was to examine the ability of the critical velocity (CV) and the endurance index (EI) to assess endurance performance during intermittent exercise. Thirteen subjects performed two intermittent runs: 15-s runs intersected with 15 s of passive recovery (15/15) and 30-s runs with 30-s rest (30/30). Runs were performed until exhaustion at three intensities (100, 95 and 90% of the speed reached at the end of the 30-15 Intermittent Fitness Test, VIFT) to calculate i) CV from the slope of the linear relationship between the total covered distance and exhaustion time (ET) (iCV); ii) anaerobic distance capacity from the Y-intercept of the distance/duration relationship (iADC); and iii) EI from the relationship between the fraction of VIFT at which the runs were performed and the log-transformed ET (iEI). Anaerobic capacity was indirectly assessed by the final velocity achieved during the Maximal Anaerobic Running Test (VMART). ET was longer for 15/15 than for 30/30 runs at similar intensities. iCV(15/15) and iCV(30/30) were not influenced by changes in ET and were highly dependent on VIFT. Neither iADC(15/15) nor iADC(30/30) was related to VMART. In contrast, iEI(15/15) was higher than iEI(30/30), and corresponded with the higher ET. In conclusion, only iEI estimated endurance capacity during repeated intermittent running.
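The iCV/iADC construction described above is an ordinary linear fit of total distance covered against exhaustion time: the slope estimates critical velocity and the Y-intercept the anaerobic distance capacity. A minimal sketch with made-up numbers (not the study's data):

```python
import numpy as np

# Exhaustion times (s) and total distances covered (m) at three
# intensities -- illustrative values only.
t_exh = np.array([410.0, 640.0, 1080.0])
dist = np.array([1890.0, 2840.0, 4610.0])

# Linear model: distance = CV * time + ADC
#   slope     -> critical velocity CV (m/s)
#   intercept -> anaerobic distance capacity ADC (m)
cv, adc = np.polyfit(t_exh, dist, 1)
print(round(cv, 2), round(adc, 1))
```

With real data one would also check the goodness of fit, since the linearity of the distance-duration relationship is what justifies the two-parameter model.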

  9. High IFIT1 expression predicts improved clinical outcome, and IFIT1 along with MGMT more accurately predicts prognosis in newly diagnosed glioblastoma.

    PubMed

    Zhang, Jin-Feng; Chen, Yao; Lin, Guo-Shi; Zhang, Jian-Dong; Tang, Wen-Long; Huang, Jian-Huang; Chen, Jin-Shou; Wang, Xing-Fu; Lin, Zhi-Xiong

    2016-06-01

Interferon-induced protein with tetratricopeptide repeat 1 (IFIT1) plays a key role in growth suppression and apoptosis promotion in cancer cells. Interferon was reported to induce the expression of IFIT1 and inhibit the expression of O-6-methylguanine-DNA methyltransferase (MGMT). This study aimed to investigate the expression of IFIT1, the correlation between IFIT1 and MGMT, and their impact on the clinical outcome in newly diagnosed glioblastoma. The expression of IFIT1 and MGMT and their correlation were investigated in the tumor tissues from 70 patients with newly diagnosed glioblastoma. The effects on progression-free survival and overall survival were evaluated. Of 70 cases, 57 (81.4%) tissue samples showed high expression of IFIT1 by immunostaining. The χ² test indicated that the expression of IFIT1 and MGMT was negatively correlated (r = -0.288, P = .016). Univariate and multivariate analyses confirmed high IFIT1 expression as a favorable prognostic indicator for progression-free survival (P = .005 and .017) and overall survival (P = .001 and .001), respectively. Patients with 2 favorable factors (high IFIT1 and low MGMT) had an improved prognosis as compared with others. The results demonstrated significantly increased expression of IFIT1 in newly diagnosed glioblastoma tissue. The negative correlation between IFIT1 and MGMT expression may be triggered by interferon. High IFIT1 can be a predictive biomarker of favorable clinical outcome, and IFIT1 along with MGMT more accurately predicts prognosis in newly diagnosed glioblastoma. PMID:26980050

  10. A time accurate prediction of the viscous flow in a turbine stage including a rotor in motion

    NASA Astrophysics Data System (ADS)

    Shavalikul, Akamol

    accurate flow characteristics in the NGV domain and the rotor domain with less computational time and computer memory requirements. In contrast, the time accurate flow simulation can predict all unsteady flow characteristics occurring in the turbine stage, but with high computational resource requirements. (Abstract shortened by UMI.)

  11. Accuracy of critical-temperature sensitivity coefficients predicted by multilayered composite plate theories

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Burton, Scott

    1992-01-01

    An assessment is made of the accuracy of the critical-temperature sensitivity coefficients of multilayered plates predicted by different modeling approaches, based on two-dimensional shear-deformation theories. The sensitivity coefficients considered measure the sensitivity of the critical temperatures to variations in different lamination and material parameters of the plate. The standard of comparison is taken to be the sensitivity coefficients obtained by the three-dimensional theory of thermoelasticity. Numerical studies are presented showing the effects of variation in the geometric and lamination parameters of the plate on the accuracy of both the sensitivity coefficients and the critical temperatures predicted by the different modeling approaches.

  12. A Systematic Review of Predictions of Survival in Palliative Care: How Accurate Are Clinicians and Who Are the Experts?

    PubMed Central

    Harris, Adam; Harries, Priscilla

    2016-01-01

    overall accuracy being reported. Data were extracted using a standardised tool, by one reviewer, which could have introduced bias. Devising search terms for prognostic studies is challenging. Every attempt was made to devise search terms that were sufficiently sensitive to detect all prognostic studies; however, it remains possible that some studies were not identified. Conclusion Studies of prognostic accuracy in palliative care are heterogeneous, but the evidence suggests that clinicians’ predictions are frequently inaccurate. No sub-group of clinicians was consistently shown to be more accurate than any other. Implications of Key Findings Further research is needed to understand how clinical predictions are formulated and how their accuracy can be improved. PMID:27560380

  13. Predicting critical temperatures of iron(II) spin crossover materials: density functional theory plus U approach.

    PubMed

    Zhang, Yachao

    2014-12-01

    A first-principles study of critical temperatures (T(c)) of spin crossover (SCO) materials requires accurate description of the strongly correlated 3d electrons as well as much computational effort. This task is still a challenge for the widely used local density or generalized gradient approximations (LDA/GGA) and hybrid functionals. One remedy, termed density functional theory plus U (DFT+U) approach, introduces a Hubbard U term to deal with the localized electrons at marginal computational cost, while treats the delocalized electrons with LDA/GGA. Here, we employ the DFT+U approach to investigate the T(c) of a pair of iron(II) SCO molecular crystals (α and β phase), where identical constituent molecules are packed in different ways. We first calculate the adiabatic high spin-low spin energy splitting ΔE(HL) and molecular vibrational frequencies in both spin states, then obtain the temperature dependent enthalpy and entropy changes (ΔH and ΔS), and finally extract T(c) by exploiting the ΔH/T - T and ΔS - T relationships. The results are in agreement with experiment. Analysis of geometries and electronic structures shows that the local ligand field in the α phase is slightly weakened by the H-bondings involving the ligand atoms and the specific crystal packing style. We find that this effect is largely responsible for the difference in T(c) of the two phases. This study shows the applicability of the DFT+U approach for predicting T(c) of SCO materials, and provides a clear insight into the subtle influence of the crystal packing effects on SCO behavior.
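The extraction step described above amounts to locating the temperature at which the free-energy difference between the spin states vanishes, ΔG(T) = ΔH(T) - T·ΔS(T) = 0 (for temperature-independent ΔH and ΔS this reduces to T(c) ≈ ΔH/ΔS). A toy numerical sketch with hypothetical temperature-dependent ΔH and ΔS, not the paper's computed values:

```python
import numpy as np

# Hypothetical enthalpy and entropy differences (HS minus LS);
# the linear temperature dependence is purely illustrative.
def dH(T):  # kJ/mol
    return 12.0 + 0.002 * T

def dS(T):  # J/(mol K)
    return 55.0 + 0.01 * T

T = np.linspace(100.0, 400.0, 3001)          # temperature grid, K
dG = dH(T) * 1000.0 - T * dS(T)              # free-energy difference, J/mol
Tc = T[np.argmin(np.abs(dG))]                # T where dG crosses zero
print(round(Tc, 1))
```

In the paper the ΔH(T) and ΔS(T) curves come from the DFT+U energy splitting and the vibrational partition functions in the two spin states; the crossing-point logic is the same.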

  14. Predicting critical temperatures of iron(II) spin crossover materials: Density functional theory plus U approach

    SciTech Connect

    Zhang, Yachao

    2014-12-07

A first-principles study of critical temperatures (T(c)) of spin crossover (SCO) materials requires accurate description of the strongly correlated 3d electrons as well as much computational effort. This task is still a challenge for the widely used local density or generalized gradient approximations (LDA/GGA) and hybrid functionals. One remedy, termed density functional theory plus U (DFT+U) approach, introduces a Hubbard U term to deal with the localized electrons at marginal computational cost, while treats the delocalized electrons with LDA/GGA. Here, we employ the DFT+U approach to investigate the T(c) of a pair of iron(II) SCO molecular crystals (α and β phase), where identical constituent molecules are packed in different ways. We first calculate the adiabatic high spin-low spin energy splitting ΔE(HL) and molecular vibrational frequencies in both spin states, then obtain the temperature dependent enthalpy and entropy changes (ΔH and ΔS), and finally extract T(c) by exploiting the ΔH/T - T and ΔS - T relationships. The results are in agreement with experiment. Analysis of geometries and electronic structures shows that the local ligand field in the α phase is slightly weakened by the H-bondings involving the ligand atoms and the specific crystal packing style. We find that this effect is largely responsible for the difference in T(c) of the two phases. This study shows the applicability of the DFT+U approach for predicting T(c) of SCO materials, and provides a clear insight into the subtle influence of the crystal packing effects on SCO behavior.

  15. An Approach for Validating Actinide and Fission Product Burnup Credit Criticality Safety Analyses--Criticality (keff) Predictions

    SciTech Connect

    Scaglione, John M; Mueller, Don; Wagner, John C

    2011-01-01

One of the most significant remaining challenges associated with expanded implementation of burnup credit in the United States is the validation of depletion and criticality calculations used in the safety evaluation - in particular, the availability and use of applicable measured data to support validation, especially for fission products. Applicants and regulatory reviewers have been constrained by both a scarcity of data and a lack of clear technical basis or approach for use of the data. U.S. Nuclear Regulatory Commission (NRC) staff have noted that the rationale for restricting their Interim Staff Guidance on burnup credit (ISG-8) to actinide-only is based largely on the lack of clear, definitive experiments that can be used to estimate the bias and uncertainty for computational analyses associated with using burnup credit. To address the issue of validation, the NRC initiated a project with the Oak Ridge National Laboratory to (1) develop and establish a technically sound validation approach (both depletion and criticality) for commercial spent nuclear fuel (SNF) criticality safety evaluations based on best-available data and methods and (2) apply the approach for representative SNF storage and transport configurations/conditions to demonstrate its usage and applicability, as well as to provide reference bias results. The purpose of this paper is to describe the criticality (keff) validation approach, and resulting observations and recommendations. Validation of the isotopic composition (depletion) calculations is addressed in a companion paper at this conference.
For criticality validation, the approach is to utilize (1) available laboratory critical experiment (LCE) data from the International Handbook of Evaluated Criticality Safety Benchmark Experiments and the French Haut Taux de Combustion (HTC) program to support validation of the principal actinides and (2) calculated sensitivities, nuclear data uncertainties, and the limited available fission

  16. The Predictive Validity of Critical Thinking Disposition on Middle-Grades Math Achievement

    ERIC Educational Resources Information Center

    LaVenia, Mark; Pineau, Kristina N.; Lang, Laura B.

    2010-01-01

    This study investigates the predictive validity of students' levels of critical thinking disposition, as measured by the California Measure of Mental Motivation (CM3; Giancarlo, Blohm, & Urdan, 2004) for students' math achievement, as measured by the Mathematics assessment included in the Florida Comprehensive Assessment Test (FCAT). The CM3 is a…

  17. Critical Features Predicting Sustained Implementation of School-Wide Positive Behavioral Interventions and Supports

    ERIC Educational Resources Information Center

    Mathews, Susanna; McIntosh, Kent; Frank, Jennifer L.; May, Seth L.

    2014-01-01

    The current study explored the extent to which a common measure of perceived implementation of critical features of Positive Behavioral Interventions and Supports (PBIS) predicted fidelity of implementation 3 years later. Respondents included school personnel from 261 schools across the United States implementing PBIS. School teams completed the…

  18. Critical Features Predicting Sustained Implementation of School-Wide Positive Behavior Support

    ERIC Educational Resources Information Center

    Mathews, Susanna; McIntosh, Kent; Frank, Jennifer; May, Seth

    2014-01-01

    The current study explored the extent to which a common measure of perceived implementation of critical features of School-wide Positive Behavior Support (SWPBS) predicted fidelity of implementation 3 years later. Respondents included school personnel from 261 schools across the United States implementing SWPBS. School teams completed the…

  19. ACCEPT: Introduction of the Adverse Condition and Critical Event Prediction Toolbox

    NASA Technical Reports Server (NTRS)

    Martin, Rodney A.; Santanu, Das; Janakiraman, Vijay Manikandan; Hosein, Stefan

    2015-01-01

The prediction of anomalies or adverse events is a challenging task, and there are a variety of methods which can be used to address the problem. In this paper, we introduce a generic framework developed in MATLAB® called ACCEPT (Adverse Condition and Critical Event Prediction Toolbox). ACCEPT is an architectural framework designed to compare and contrast the performance of a variety of machine learning and early warning algorithms, and tests the capability of these algorithms to robustly predict the onset of adverse events in any time-series data generating systems or processes.

  20. Prediction of critical illness in elderly outpatients using elder risk assessment: a population-based study

    PubMed Central

    Biehl, Michelle; Takahashi, Paul Y; Cha, Stephen S; Chaudhry, Rajeev; Gajic, Ognjen; Thorsteinsdottir, Bjorg

    2016-01-01

Rationale Identifying patients at high risk of critical illness is necessary for the development and testing of strategies to prevent critical illness. The aim of this study was to determine the relationship between high elder risk assessment (ERA) score and critical illness requiring intensive care and to see if the ERA can be used as a prediction tool to identify elderly patients at the primary care visit who are at high risk of critical illness. Methods A population-based historical cohort study was conducted in elderly patients (age >65 years) identified at the time of primary care visit in Rochester, MN, USA. Predictors including age, previous hospital days, and comorbid health conditions were identified from routine administrative data available in the electronic medical record. The main outcome was critical illness, defined as sepsis, need for mechanical ventilation, or death within 2 years of initial visit. Patients with an ERA score of ≥16 were considered to be at high risk. The discrimination of the ERA score was assessed using area under the receiver operating characteristic curve. Results Of the 13,457 eligible patients, 9,872 gave consent for medical record review and had full information on intensive care unit utilization. The mean age was 75.8 years (standard deviation ±7.6 years), and 58% were female, 94% were Caucasian, 62% were married, and 13% were living in nursing homes. In the overall group, 417 patients (4.2%) suffered from critical illness. In the 1,134 patients with ERA ≥16, 154 (14%) suffered from critical illness. An ERA score ≥16 predicted critical illness (odds ratio 6.35; 95% confidence interval 3.51–11.48). The area under the receiver operating characteristic curve was 0.75, which indicated good discrimination. Conclusion A simple model based on easily obtainable administrative data predicted critical illness in the next 2 years in elderly outpatients, with up to 14% of the highest risk population suffering from critical illness.

  1. Anxiety Interacts With Expressed Emotion Criticism in the Prediction of Psychotic Symptom Exacerbation

    PubMed Central

    Docherty, Nancy M.; St-Hilaire, Annie; Aakre, Jennifer M.; Seghers, James P.; McCleery, Amanda; Divilbiss, Marielle

    2011-01-01

    Psychotic symptoms are exacerbated by social stressors in schizophrenia and schizoaffective disorder patients as a group. More specifically, critical attitudes toward patients on the part of family members and others have been associated with a higher risk of relapse in the patients. Some patients appear to be especially vulnerable in this regard. One variable that could affect the degree of sensitivity to a social stressor such as criticism is the individual’s level of anxiety. The present longitudinal study assessed 27 relatively stable outpatients with schizophrenia or schizoaffective disorder and the single “most influential other” (MIO) person for each patient. As hypothesized, (1) patients with high critical MIOs showed increases in psychotic symptoms over time, compared with patients with low critical MIOs; (2) patients high in anxiety at the baseline assessment showed increases in psychotic symptoms at follow-up, compared with patients low in anxiety, and (3) patients with high levels of anxiety at baseline and high critical MIOs showed the greatest exacerbation of psychotic symptoms over time. Objectively measured levels of criticism were more predictive than patient-rated levels of criticism. PMID:19892819

  2. An evolutionary model-based algorithm for accurate phylogenetic breakpoint mapping and subtype prediction in HIV-1.

    PubMed

    Kosakovsky Pond, Sergei L; Posada, David; Stawiski, Eric; Chappey, Colombe; Poon, Art F Y; Hughes, Gareth; Fearnhill, Esther; Gravenor, Mike B; Leigh Brown, Andrew J; Frost, Simon D W

    2009-11-01

    Genetically diverse pathogens (such as Human Immunodeficiency virus type 1, HIV-1) are frequently stratified into phylogenetically or immunologically defined subtypes for classification purposes. Computational identification of such subtypes is helpful in surveillance, epidemiological analysis and detection of novel variants, e.g., circulating recombinant forms in HIV-1. A number of conceptually and technically different techniques have been proposed for determining the subtype of a query sequence, but there is not a universally optimal approach. We present a model-based phylogenetic method for automatically subtyping an HIV-1 (or other viral or bacterial) sequence, mapping the location of breakpoints and assigning parental sequences in recombinant strains as well as computing confidence levels for the inferred quantities. Our Subtype Classification Using Evolutionary ALgorithms (SCUEAL) procedure is shown to perform very well in a variety of simulation scenarios, runs in parallel when multiple sequences are being screened, and matches or exceeds the performance of existing approaches on typical empirical cases. We applied SCUEAL to all available polymerase (pol) sequences from two large databases, the Stanford Drug Resistance database and the UK HIV Drug Resistance Database. Comparing with subtypes which had previously been assigned revealed that a minor but substantial (approximately 5%) fraction of pure subtype sequences may in fact be within- or inter-subtype recombinants. A free implementation of SCUEAL is provided as a module for the HyPhy package and the Datamonkey web server. Our method is especially useful when an accurate automatic classification of an unknown strain is desired, and is positioned to complement and extend faster but less accurate methods. Given the increasingly frequent use of HIV subtype information in studies focusing on the effect of subtype on treatment, clinical outcome, pathogenicity and vaccine design, the importance of accurate

  3. Predicting the unpredictable: critical analysis and practical implications of predictive anticipatory activity

    PubMed Central

    Mossbridge, Julia A.; Tressoldi, Patrizio; Utts, Jessica; Ives, John A.; Radin, Dean; Jonas, Wayne B.

    2014-01-01

A recent meta-analysis of experiments from seven independent laboratories (n = 26) indicates that the human body can apparently detect randomly delivered stimuli occurring 1–10 s in the future (Mossbridge et al., 2012). The key observation in these studies is that human physiology appears to be able to distinguish between unpredictable dichotomous future stimuli, such as emotional vs. neutral images or sound vs. silence. This phenomenon has been called presentiment (as in “feeling the future”). In this paper we call it predictive anticipatory activity (PAA). The phenomenon is “predictive” because it can distinguish between upcoming stimuli; it is “anticipatory” because the physiological changes occur before a future event; and it is an “activity” because it involves changes in the cardiopulmonary, skin, and/or nervous systems. PAA is an unconscious phenomenon that seems to be a time-reversed reflection of the usual physiological response to a stimulus. It appears to resemble precognition (consciously knowing something is going to happen before it does), but PAA specifically refers to unconscious physiological reactions as opposed to conscious premonitions. Though it is possible that PAA underlies the conscious experience of precognition, experiments testing this idea have not produced clear results. The first part of this paper reviews the evidence for PAA and examines the two most difficult challenges for obtaining valid evidence for it: expectation bias and multiple analyses. The second part speculates on possible mechanisms and the theoretical implications of PAA for understanding physiology and consciousness. The third part examines potential practical applications. PMID:24723870

  4. Conformations of 1,2-dimethoxypropane and 5-methoxy-1,3-dioxane: are ab initio quantum chemistry predictions accurate?

    NASA Astrophysics Data System (ADS)

    Smith, Grant D.; Jaffe, Richard L.; Yoon, Do. Y.

    1998-06-01

High-level ab initio quantum chemistry calculations are shown to predict conformer populations of 1,2-dimethoxypropane and 5-methoxy-1,3-dioxane that are consistent with gas-phase NMR vicinal coupling constant measurements. The conformational energies of the cyclic ether 5-methoxy-1,3-dioxane are found to be consistent with those predicted by a rotational isomeric state (RIS) model based upon the acyclic analog 1,2-dimethoxypropane. The quantum chemistry and RIS calculations indicate the presence of strong attractive 1,5 C(H3)⋯O electrostatic interactions in these molecules, similar to those found in 1,2-dimethoxyethane.
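Conformer populations follow from computed conformational energies by Boltzmann weighting, p_i ∝ exp(-E_i/RT). A generic sketch with illustrative placeholder energies and conformer labels, not values from the paper:

```python
import math

R = 8.314e-3  # gas constant, kJ/(mol K)
T = 300.0     # temperature, K

# Hypothetical relative conformer energies (kJ/mol), lowest set to zero
energies = {"tgt": 0.0, "ttt": 1.8, "tgg": 4.1}

# Boltzmann weights and normalized populations
weights = {c: math.exp(-e / (R * T)) for c, e in energies.items()}
Z = sum(weights.values())
populations = {c: w / Z for c, w in weights.items()}
print(populations)
```

Population-weighted averages of the conformer-specific coupling constants are what would then be compared against the gas-phase NMR measurements.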

  5. A Maximal Graded Exercise Test to Accurately Predict VO2max in 18-65-Year-Old Adults

    ERIC Educational Resources Information Center

    George, James D.; Bradshaw, Danielle I.; Hyde, Annette; Vehrs, Pat R.; Hager, Ronald L.; Yanowitz, Frank G.

    2007-01-01

    The purpose of this study was to develop an age-generalized regression model to predict maximal oxygen uptake (VO2max) based on a maximal treadmill graded exercise test (GXT; George, 1996). Participants (N = 100), ages 18-65 years, reached a maximal level of exertion (mean ± standard deviation [SD]; maximal heart rate [HR sub…

  6. Survival outcomes scores (SOFT, BAR, and Pedi-SOFT) are accurate in predicting post-liver transplant survival in adolescents.

    PubMed

    Conjeevaram Selvakumar, Praveen Kumar; Maksimak, Brian; Hanouneh, Ibrahim; Youssef, Dalia H; Lopez, Rocio; Alkhouri, Naim

    2016-09-01

    SOFT and BAR scores utilize recipient, donor, and graft factors to predict the 3-month survival after LT in adults (≥18 years). Recently, the Pedi-SOFT score was developed to predict 3-month survival after LT in young children (≤12 years). These scoring systems have not been studied in adolescent patients (13-17 years). We evaluated the accuracy of these scoring systems in predicting the 3-month post-LT survival in adolescents through a retrospective analysis of UNOS data for patients aged 13-17 years who received LT between 03/01/2002 and 12/31/2012. Recipients of combined organ transplants, donation after cardiac death, or living donor grafts were excluded. A total of 711 adolescent LT recipients were included, with a mean age of 15.2±1.4 years. A total of 100 patients died post-LT, including 33 within 3 months. SOFT, BAR, and Pedi-SOFT scores were all found to be good predictors of 3-month post-transplant survival outcome, with areas under the ROC curve of 0.81, 0.80, and 0.81, respectively. All three scores provided good accuracy for predicting 3-month survival post-LT in adolescents and may help clinical decision making to optimize survival rate and organ utilization. PMID:27478012

  7. Is demography destiny? Application of machine learning techniques to accurately predict population health outcomes from a minimal demographic dataset.

    PubMed

    Luo, Wei; Nguyen, Thin; Nichols, Melanie; Tran, Truyen; Rana, Santu; Gupta, Sunil; Phung, Dinh; Venkatesh, Svetha; Allender, Steve

    2015-01-01

    For years, we have relied on population surveys to keep track of regional public health statistics, including the prevalence of non-communicable diseases. Because of the cost and limitations of such surveys, we often do not have up-to-date data on the health outcomes of a region. In this paper, we examined the feasibility of inferring regional health outcomes from socio-demographic data that are widely available and regularly updated through national censuses and community surveys. Using data for 50 American states (excluding Washington DC) from 2007 to 2012, we constructed a machine-learning model to predict the prevalence of six non-communicable disease (NCD) outcomes (four NCDs and two major clinical risk factors), based on population socio-demographic characteristics from the American Community Survey. We found that regional prevalence estimates for non-communicable diseases can be reasonably predicted. The predictions were highly correlated with the observed data, in both the states included in the derivation model (median correlation 0.88) and those excluded from the development for use as a completely separate validation sample (median correlation 0.85), demonstrating that the model had sufficient external validity to make good predictions, based on demographics alone, for areas not included in the model development. This highlights both the utility of this sophisticated approach to model development, and the vital importance of simple socio-demographic characteristics as both indicators and determinants of chronic disease.
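
The reported agreement between model predictions and survey data is a Pearson correlation across regions. A minimal sketch of that comparison, with made-up state-level prevalence values rather than data from the study:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between predicted and observed prevalence values."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical state-level prevalence (%): model predictions vs. survey estimates.
predicted = [9.1, 10.4, 8.7, 12.0, 11.2]
observed = [8.8, 10.9, 8.5, 12.6, 10.7]
print(round(pearson_r(predicted, observed), 3))
```

In the study this correlation is computed per outcome and summarized as a median over the six outcomes.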

  8. Genomic Models of Short-Term Exposure Accurately Predict Long-Term Chemical Carcinogenicity and Identify Putative Mechanisms of Action

    PubMed Central

    Gusenleitner, Daniel; Auerbach, Scott S.; Melia, Tisha; Gómez, Harold F.; Sherr, David H.; Monti, Stefano

    2014-01-01

    Background Despite an overall decrease in incidence of and mortality from cancer, about 40% of Americans will be diagnosed with the disease in their lifetime, and around 20% will die of it. Current approaches to test carcinogenic chemicals adopt the 2-year rodent bioassay, which is costly and time-consuming. As a result, fewer than 2% of the chemicals on the market have actually been tested. However, evidence accumulated to date suggests that gene expression profiles from model organisms exposed to chemical compounds reflect underlying mechanisms of action, and that these toxicogenomic models could be used in the prediction of chemical carcinogenicity. Results In this study, we used a rat-based microarray dataset from the NTP DrugMatrix Database to test the ability of toxicogenomics to model carcinogenicity. We analyzed 1,221 gene-expression profiles obtained from rats treated with 127 well-characterized compounds, including genotoxic and non-genotoxic carcinogens. We built a classifier that predicts a chemical's carcinogenic potential with an AUC of 0.78, and validated it on an independent dataset from the Japanese Toxicogenomics Project consisting of 2,065 profiles from 72 compounds. Finally, we identified differentially expressed genes associated with chemical carcinogenesis, and developed novel data-driven approaches for the molecular characterization of the response to chemical stressors. Conclusion Here, we validate a toxicogenomic approach to predict carcinogenicity and provide strong evidence that, with a larger set of compounds, we should be able to improve the sensitivity and specificity of the predictions. We found that the prediction of carcinogenicity is tissue-dependent and that the results also confirm and expand upon previous studies implicating DNA damage, the peroxisome proliferator-activated receptor, the aryl hydrocarbon receptor, and regenerative pathology in the response to carcinogen exposure. PMID:25058030

  9. Length of sick leave – Why not ask the sick-listed? Sick-listed individuals predict their length of sick leave more accurately than professionals

    PubMed Central

    Fleten, Nils; Johnsen, Roar; Førde, Olav Helge

    2004-01-01

    Background Knowledge of factors that accurately predict long-lasting sick leave is sparse, but information on medical condition is believed to be necessary to identify persons at risk. Based on the current practice of identifying sick-listed individuals at risk of long-lasting sick leave, the objectives of this study were to assess the diagnostic accuracy of the lengths of sick leave predicted in the Norwegian National Insurance Offices, and to compare their predictions with the self-predictions of the sick-listed. Methods Based on medical certificates, two National Insurance medical consultants and two National Insurance officers predicted, at day 14, the length of sick leave in 993 consecutive cases of sick leave, resulting from musculoskeletal or mental disorders, in this 1-year follow-up study. Two months later they reassessed 322 cases based on extended medical certificates. Self-predictions were obtained from 152 sick-listed subjects when their sick leave passed 14 days. Diagnostic accuracy of the predictions was analysed by ROC area, sensitivity, specificity, and likelihood ratio; positive predictive value was included in the analyses of predictive validity. Results The sick-listed identified sick leave lasting 12 weeks or longer with an ROC area of 80.9% (95% CI 73.7–86.8%), while the corresponding estimates for medical consultants and officers had ROC areas of 55.6% (95% CI 45.6–65.6%) and 56.0% (95% CI 46.6–65.4%), respectively. The predictions of sick-listed males were significantly better than those of female subjects, and older subjects predicted somewhat better than younger subjects. Neither formal medical competence, nor additional medical information, noticeably improved the diagnostic accuracy based on medical certificates. Conclusion This study demonstrates that the accuracy of a prognosis based on medical documentation in sickness absence forms is lower than that of one based on direct communication with the sick-listed themselves.
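
The ROC areas quoted above are equivalent to the Mann-Whitney probability that a randomly chosen case that did reach long-term sick leave receives a higher predicted duration than a randomly chosen case that did not. A small sketch with illustrative numbers, not data from the study:

```python
def roc_area(scores_pos, scores_neg):
    """ROC area as the probability that a positive case outranks a negative
    case (ties count half), i.e. the Mann-Whitney U statistic normalised."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Illustrative predicted durations (weeks); not data from the study.
long_leave = [14, 16, 10, 20]   # predictions for subjects whose leave lasted >= 12 weeks
short_leave = [4, 8, 12, 6]     # predictions for subjects who recovered sooner
print(roc_area(long_leave, short_leave))
```

A value of 0.5 corresponds to chance-level prediction, which is close to what the insurance professionals achieved.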

  10. The Corrected Simulation Method of Critical Heat Flux Prediction for Water-Cooled Divertor Based on Euler Homogeneous Model

    NASA Astrophysics Data System (ADS)

    Zhang, Jingyang; Han, Le; Chang, Haiping; Liu, Nan; Xu, Tiejun

    2016-02-01

    An accurate critical heat flux (CHF) prediction method is the key factor for realizing the steady-state operation of a water-cooled divertor that works under one-sided high heating flux conditions. An improved CHF prediction method based on Euler's homogeneous model for flow boiling combined with the realizable k-ɛ model for single-phase flow is adopted in this paper, in which the time relaxation coefficients are corrected by the Hertz-Knudsen formula in order to improve the calculation accuracy of vapor-liquid conversion efficiency under high heating flux conditions. Moreover, large local differences of liquid physical properties due to the extreme nonuniform heating flux on the cooling wall along the circumference direction are revised by the IAPWS-IF97 formulation. Therefore, this method can improve the calculation accuracy of heat and mass transfer between liquid phase and vapor phase in a CHF prediction simulation of water-cooled divertors under the one-sided high heating condition. An experimental example is simulated based on the improved and the uncorrected methods. The simulation results, such as temperature, void fraction and heat transfer coefficient, are analyzed to achieve the CHF prediction. The results show that the maximum error of CHF based on the improved method is 23.7%, while that of CHF based on the uncorrected method is up to 188%, as compared with the experimental results of Ref. [12]. Finally, this method is verified by comparison with the experimental data obtained by the International Thermonuclear Experimental Reactor (ITER), with a maximum error of only 6%. This method provides an efficient tool for the CHF prediction of water-cooled divertors. This work was supported by the National Magnetic Confinement Fusion Science Program of China (No. 2010GB104005) and the National Natural Science Foundation of China (No. 51406085).
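
The Hertz-Knudsen correction mentioned above is, in its standard textbook form (the paper's corrected time relaxation coefficients are not given in the abstract):

```latex
j = \alpha \sqrt{\frac{M}{2 \pi R T}} \left( p_{\mathrm{sat}}(T_l) - p_v \right)
```

where j is the interfacial mass flux, α the accommodation coefficient, M the molar mass, R the universal gas constant, T the interface temperature, and p_sat(T_l) and p_v the saturation and actual vapour pressures. A positive difference drives evaporation, a negative one condensation.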

  11. Accurate and efficient prediction of fine-resolution hydrologic and carbon dynamic simulations from coarse-resolution models

    NASA Astrophysics Data System (ADS)

    Pau, George Shu Heng; Shen, Chaopeng; Riley, William J.; Liu, Yaning

    2016-02-01

    Topography and the biotic and abiotic parameters are typically upscaled to make watershed-scale hydrologic-biogeochemical models computationally tractable. However, the upscaling procedure can produce biases when nonlinear interactions between different processes are not fully captured at coarse resolutions. Here we applied the Proper Orthogonal Decomposition Mapping Method (PODMM) to downscale the field solutions from a coarse (7 km) resolution grid to a fine (220 m) resolution grid. PODMM trains a reduced-order model (ROM) with coarse-resolution and fine-resolution solutions, here obtained using PAWS+CLM, a quasi-3-D watershed processes model that has been validated for many temperate watersheds. Subsequent fine-resolution solutions were approximated based only on coarse-resolution solutions and the ROM. The approximation errors were efficiently quantified using an error estimator. By jointly estimating correlated variables and temporally varying the ROM parameters, we further reduced the approximation errors by up to 20%. We also improved the method's robustness by constructing multiple ROMs using different sets of variables, and selecting the best approximation based on the error estimator. The ROMs produced accurate downscaling of soil moisture, latent heat flux, and net primary production with O(1000) reduction in computational cost. The subgrid distributions were also nearly indistinguishable from the ones obtained using the fine-resolution model. Compared to coarse-resolution solutions, biases in upscaled ROM solutions were reduced by up to 80%. This method has the potential to help address the long-standing spatial scaling problem in hydrology and enable long-time integration, parameter estimation, and stochastic uncertainty analysis while accurately representing the heterogeneities.
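
The core of a POD-based mapping of this kind can be sketched in a few lines: build a POD basis from fine-resolution training snapshots, fit a least-squares map from coarse solutions to POD coefficients, then reconstruct new fine-resolution fields from coarse solutions alone. The sketch below uses a synthetic linear test problem, not PAWS+CLM output, and omits the error estimator and multi-ROM selection described in the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training snapshots: columns are solutions at different times.
# 'fine' has many grid cells, 'coarse' a few; they are linearly related here
# purely for illustration -- real watershed fields are nonlinear.
n_fine, n_coarse, n_train = 200, 10, 30
lift = rng.normal(size=(n_fine, n_coarse))
coarse_train = rng.normal(size=(n_coarse, n_train))
fine_train = lift @ coarse_train

# POD basis of the fine-resolution snapshots (truncated SVD).
u, s, _ = np.linalg.svd(fine_train, full_matrices=False)
basis = u[:, :n_coarse]                      # keep the leading modes

# Map coarse solutions to POD coefficients by least squares.
coeffs_train = basis.T @ fine_train
mapping, *_ = np.linalg.lstsq(coarse_train.T, coeffs_train.T, rcond=None)

# Approximate a new fine-resolution field from a coarse solution alone.
coarse_new = rng.normal(size=(n_coarse,))
fine_true = lift @ coarse_new
fine_approx = basis @ (mapping.T @ coarse_new)
print(np.linalg.norm(fine_true - fine_approx) / np.linalg.norm(fine_true))
```

On this linear toy problem the reconstruction is essentially exact; the value of the method lies in how well the learned map generalises when the coarse-to-fine relation is nonlinear.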

  12. Prognostic models and risk scores: can we accurately predict postoperative nausea and vomiting in children after craniotomy?

    PubMed

    Neufeld, Susan M; Newburn-Cook, Christine V; Drummond, Jane E

    2008-10-01

    Postoperative nausea and vomiting (PONV) is a problem for many children after craniotomy. Prognostic models and risk scores help identify who is at risk for an adverse event such as PONV to help guide clinical care. The purpose of this article is to assess whether an existing prognostic model or risk score can predict PONV in children after craniotomy. The concepts of transportability, calibration, and discrimination are presented to identify what is required to have a valid tool for clinical use. Although previous work may inform clinical practice and guide future research, existing prognostic models and risk scores do not appear to be options for predicting PONV in children undergoing craniotomy. However, until risk factors are further delineated, followed by the development and validation of prognostic models and risk scores that include children after craniotomy, clinical judgment in the context of current research may serve as a guide for clinical care in this population. PMID:18939320

  13. How accurately can subject-specific finite element models predict strains and strength of human femora? Investigation using full-field measurements.

    PubMed

    Grassi, Lorenzo; Väänänen, Sami P; Ristinmaa, Matti; Jurvelin, Jukka S; Isaksson, Hanna

    2016-03-21

    Subject-specific finite element models have been proposed as a tool to improve fracture risk assessment in individuals. A thorough laboratory validation against experimental data is required before introducing such models in clinical practice. Results from digital image correlation can provide full-field strain distributions over the specimen surface during in vitro tests, instead of at a few pre-defined locations as with strain gauges. The aim of this study was to validate finite element models of human femora against experimental data from three cadaver femora, both in terms of femoral strength and of the full-field strain distribution collected with digital image correlation. The results showed a high accuracy between predicted and measured principal strains (R² = 0.93, RMSE = 10%, 1600 validated data points per specimen). Femoral strength was predicted using a rate dependent material model with specific strain limit values for yield and failure. This provided an accurate prediction (<2% error) for two out of three specimens. In the third specimen, an accidental change in the boundary conditions occurred during the experiment, which compromised the femoral strength validation. The achieved strain accuracy was comparable to that obtained in state-of-the-art studies which validated their prediction accuracy against 10-16 strain gauge measurements. Fracture force was accurately predicted, with the predicted failure location being very close to the experimental fracture rim. Despite the low sample size and the single loading condition tested, the present combined numerical-experimental method showed that finite element models can predict femoral strength by providing a thorough description of the local bone mechanical response. PMID:26944687
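
The R² and RMSE agreement metrics quoted above can be computed as follows. The strain values are illustrative, not data from the study, and note that RMSE normalisations vary between papers; here it is scaled by the peak measured magnitude:

```python
import math

def r_squared(pred, meas):
    """Coefficient of determination between predicted and measured values."""
    mean_meas = sum(meas) / len(meas)
    ss_res = sum((m - p) ** 2 for p, m in zip(pred, meas))
    ss_tot = sum((m - mean_meas) ** 2 for m in meas)
    return 1.0 - ss_res / ss_tot

def rmse_percent(pred, meas):
    """RMSE normalised by the peak measured magnitude, in percent."""
    rmse = math.sqrt(sum((m - p) ** 2 for p, m in zip(pred, meas)) / len(meas))
    return 100.0 * rmse / max(abs(m) for m in meas)

# Illustrative principal strains (microstrain); not data from the study.
measured = [-2100, -1500, -600, 400, 1100, 1800]
predicted = [-2000, -1400, -700, 500, 1000, 1900]
print(round(r_squared(predicted, measured), 3),
      round(rmse_percent(predicted, measured), 1))
```

In the paper these metrics are pooled over roughly 1600 digital-image-correlation points per specimen rather than a handful of values.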

  14. An Optimized Method for Accurate Fetal Sex Prediction and Sex Chromosome Aneuploidy Detection in Non-Invasive Prenatal Testing.

    PubMed

    Wang, Ting; He, Quanze; Li, Haibo; Ding, Jie; Wen, Ping; Zhang, Qin; Xiang, Jingjing; Li, Qiong; Xuan, Liming; Kong, Lingyin; Mao, Yan; Zhu, Yijun; Shen, Jingjing; Liang, Bo; Li, Hong

    2016-01-01

    Massively parallel sequencing (MPS) combined with bioinformatic analysis has been widely applied to detect fetal chromosomal aneuploidies such as trisomy 21, 18, 13 and sex chromosome aneuploidies (SCAs) by sequencing cell-free fetal DNA (cffDNA) from maternal plasma, so-called non-invasive prenatal testing (NIPT). However, many technical challenges, such as dependency on correct fetal sex prediction, large variations of chromosome Y measurement and high sensitivity to random reads mapping, may result in a higher false negative rate (FNR) and false positive rate (FPR) in fetal sex prediction as well as in SCAs detection. Here, we developed an optimized method to improve the accuracy of the current method by filtering out randomly mapped reads in six specific regions of the Y chromosome. The method reduces the FNR and FPR of fetal sex prediction from nearly 1% to 0.01% and 0.06%, respectively, and works robustly under conditions of low fetal DNA concentration (1%) in testing and simulation of 92 samples. The optimized method was further confirmed by large-scale testing (1590 samples), suggesting that it is reliable and robust enough for clinical testing.
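
The central filtering step, dropping reads that map into specific chromosome-Y regions, can be sketched as an interval test. The coordinates below are placeholders; the six regions used in the paper are not specified in the abstract:

```python
# Hypothetical exclusion regions on chrY (start, end); the six regions used in
# the paper are not given in the abstract, so these intervals are placeholders.
EXCLUDED_REGIONS = [(100_000, 150_000), (2_650_000, 2_700_000), (9_000_000, 9_200_000)]

def keep_read(chrom, pos):
    """Drop reads whose mapped position falls inside any excluded chrY region."""
    if chrom != "chrY":
        return True
    return not any(start <= pos < end for start, end in EXCLUDED_REGIONS)

reads = [("chrY", 120_000), ("chrY", 5_000_000), ("chr21", 120_000)]
kept = [r for r in reads if keep_read(*r)]
print(kept)
```

Chromosome-Y read counts for sex prediction are then computed from the kept reads only, which is what suppresses the randomly mapped reads driving the error rates.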

  15. Coronary Computed Tomographic Angiography Does Not Accurately Predict the Need of Coronary Revascularization in Patients with Stable Angina

    PubMed Central

    Hong, Sung-Jin; Her, Ae-Young; Suh, Yongsung; Won, Hoyoun; Cho, Deok-Kyu; Cho, Yun-Hyeong; Yoon, Young-Won; Lee, Kyounghoon; Kang, Woong Chol; Kim, Yong Hoon; Kim, Sang-Wook; Shin, Dong-Ho; Kim, Jung-Sun; Kim, Byeong-Keuk; Ko, Young-Guk; Choi, Byoung-Wook; Choi, Donghoon; Jang, Yangsoo

    2016-01-01

    Purpose To evaluate the ability of coronary computed tomographic angiography (CCTA) to predict the need of coronary revascularization in symptomatic patients with stable angina who were referred to a cardiac catheterization laboratory for coronary revascularization. Materials and Methods Pre-angiography CCTA findings were analyzed in 1846 consecutive symptomatic patients with stable angina, who were referred to a cardiac catheterization laboratory at six hospitals and were potential candidates for coronary revascularization between July 2011 and December 2013. The number of patients requiring revascularization was determined based on the severity of coronary stenosis as assessed by CCTA. This was compared to the actual number of revascularization procedures performed in the cardiac catheterization laboratory. Results Based on CCTA findings, coronary revascularization was indicated in 877 (48%) and not indicated in 969 (52%) patients. Of the 877 patients indicated for revascularization by CCTA, only 600 (68%) underwent the procedure, whereas 285 (29%) of the 969 patients not indicated for revascularization, as assessed by CCTA, underwent the procedure. When the coronary arteries were divided into 15 segments using the American Heart Association coronary tree model, the sensitivity, specificity, positive predictive value, and negative predictive value of CCTA for therapeutic decision making on a per-segment analysis were 42%, 96%, 40%, and 96%, respectively. Conclusion CCTA-based assessment of coronary stenosis severity does not sufficiently differentiate between coronary segments requiring revascularization versus those not requiring revascularization. Conventional coronary angiography should be considered to determine the need of revascularization in symptomatic patients with stable angina. PMID:27401637
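
The per-segment figures quoted above come from a standard 2x2 table. A minimal sketch, with counts chosen only to approximate the reported 42%/96%/40%/96% values (the actual per-segment table is not in the abstract):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Per-segment classification metrics from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Illustrative counts approximating the reported 42%/96%/40%/96% figures;
# not the study's actual 2x2 table.
m = diagnostic_metrics(tp=420, fp=630, fn=580, tn=15120)
print({k: round(v, 2) for k, v in m.items()})
```

The pattern of high specificity/NPV with low sensitivity/PPV is what drives the paper's conclusion that CCTA alone cannot select segments for revascularization.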

  18. A highly accurate protein structural class prediction approach using auto cross covariance transformation and recursive feature elimination.

    PubMed

    Li, Xiaowei; Liu, Taigang; Tao, Peiying; Wang, Chunhua; Chen, Lanming

    2015-12-01

    Structural class characterizes the overall folding type of a protein or its domain. Many methods have been proposed to improve the prediction accuracy of protein structural class in recent years, but it remains a challenge for low-similarity sequences. In this study, we introduce a feature extraction technique based on auto cross covariance (ACC) transformation of the position-specific score matrix (PSSM) to represent a protein sequence. Then support vector machine-recursive feature elimination (SVM-RFE) is adopted to select the top K features according to their importance, and these features are input to a support vector machine (SVM) to conduct the prediction. Performance of the proposed method is evaluated using the jackknife test on three low-similarity datasets, i.e., D640, 1189 and 25PDB. By means of this method, overall accuracies of 97.2%, 96.2%, and 93.3% are achieved on these three datasets, which are higher than those of most existing methods. This suggests that the proposed method could serve as a very cost-effective tool for predicting protein structural class, especially for low-similarity datasets.
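
The ACC transformation itself is a simple lagged covariance over the PSSM columns. A toy sketch (three columns instead of the usual 20; the SVM-RFE ranking and the SVM classifier are omitted):

```python
def acc_features(pssm, max_lag=2):
    """Auto cross covariance (ACC) features of a PSSM.

    pssm: list of rows, one per residue position; each row holds the
    position-specific scores for the amino-acid columns.
    For lag g and column pair (i, j):
        ACC(i, j, g) = mean over t of (S[t][i] - mean_i) * (S[t+g][j] - mean_j)
    """
    length = len(pssm)
    n_cols = len(pssm[0])
    means = [sum(row[j] for row in pssm) / length for j in range(n_cols)]
    feats = []
    for g in range(1, max_lag + 1):
        for i in range(n_cols):
            for j in range(n_cols):
                cov = sum((pssm[t][i] - means[i]) * (pssm[t + g][j] - means[j])
                          for t in range(length - g)) / (length - g)
                feats.append(cov)
    return feats

# Toy 5-position, 3-column PSSM (a real PSSM has 20 columns).
toy = [[1, 0, 2], [0, 1, 1], [2, 1, 0], [1, 2, 1], [0, 1, 2]]
features = acc_features(toy, max_lag=2)
print(len(features))  # max_lag * n_cols * n_cols = 2 * 3 * 3 = 18 features
```

This yields a fixed-length vector regardless of sequence length, which is what allows an SVM to consume proteins of different sizes.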

  19. A Novel Method for the Prediction of Critical Inclusion Size Leading to Fatigue Failure

    NASA Astrophysics Data System (ADS)

    Saberifar, S.; Mashreghi, A. R.

    2012-06-01

    The fatigue behavior of two commercial 30MnVS6 steels with similar microstructures and mechanical properties, containing inclusions of different sizes, was studied in the 10^7 cycle fatigue regime. The scanning electron microscopy (SEM) investigations of the fracture surfaces revealed that the nonmetallic inclusions are the main sources of fatigue crack initiation. Calculated according to Murakami's model, the stress intensity factors were found to be suitable for the assessment of fatigue behavior. In this article, a new method is proposed for the prediction of the critical inclusion size, using Murakami's model. According to this method, a critical stress intensity factor was determined for the estimation of the critical inclusion size causing fatigue failure.
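
Murakami's sqrt(area) model, on which the proposed critical-size estimate rests, relates the stress intensity factor to the square root of the defect's projected area; inverting it at a critical stress intensity gives a critical inclusion size. A sketch with illustrative numbers, not values from the 30MnVS6 study:

```python
import math

def stress_intensity(sigma_mpa, sqrt_area_um, location_factor=0.5):
    """Murakami's sqrt(area) model: K = F * sigma * sqrt(pi * sqrt(area)).

    The location factor F is commonly taken as 0.5 for interior inclusions
    and 0.65 for surface defects; sqrt(area) is converted to metres so K
    comes out in MPa*sqrt(m).
    """
    sqrt_area_m = sqrt_area_um * 1e-6
    return location_factor * sigma_mpa * math.sqrt(math.pi * sqrt_area_m)

def critical_sqrt_area(k_crit, sigma_mpa, location_factor=0.5):
    """Invert the model: largest sqrt(area) (in um) that stays below k_crit."""
    sqrt_area_m = (k_crit / (location_factor * sigma_mpa)) ** 2 / math.pi
    return sqrt_area_m * 1e6

# Illustrative values only (MPa and MPa*sqrt(m)); not those of the study.
sigma, k_crit = 450.0, 4.0
size = critical_sqrt_area(k_crit, sigma)
print(round(size, 1))
```

Inclusions larger than this sqrt(area) would exceed the critical stress intensity at the given stress amplitude and are predicted to initiate fatigue failure.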

  20. A general unified non-equilibrium model for predicting saturated and subcooled critical two-phase flow rates through short and long tubes

    SciTech Connect

    Fraser, D.W.H.; Abdelmessih, A.H.

    1995-09-01

    A general unified model is developed to predict one-component critical two-phase pipe flow. Modelling of the two-phase flow is accomplished by describing the evolution of the flow between the location of flashing inception and the exit (critical) plane. The model approximates the nonequilibrium phase change process via thermodynamic equilibrium paths. Included are the relative effects of varying the location of flashing inception, pipe geometry, fluid properties and length to diameter ratio. The model predicts that a range of critical mass fluxes exists and is bound by a maximum and minimum value for a given thermodynamic state. This range is more pronounced at lower subcooled stagnation states and can be attributed to the variation in the location of flashing inception. The model is based on the results of an experimental study of the critical two-phase flow of saturated and subcooled water through long tubes. In that study, the location of flashing inception was accurately controlled and adjusted through the use of a new device. The data obtained revealed that for fixed stagnation conditions, the maximum critical mass flux occurred with flashing inception located near the pipe exit; while minimum critical mass fluxes occurred with the flashing front located further upstream. Available data since 1970 for both short and long tubes over a wide range of conditions are compared with the model predictions. This includes test section L/D ratios from 25 to 300 and covers a temperature and pressure range of 110 to 280°C and 0.16 to 6.9 MPa, respectively. The predicted maximum and minimum critical mass fluxes show an excellent agreement with the range observed in the experimental data.

  1. aPPRove: An HMM-Based Method for Accurate Prediction of RNA-Pentatricopeptide Repeat Protein Binding Events.

    PubMed

    Harrison, Thomas; Ruiz, Jaime; Sloan, Daniel B; Ben-Hur, Asa; Boucher, Christina

    2016-01-01

    Pentatricopeptide repeat containing proteins (PPRs) bind to RNA transcripts originating from mitochondria and plastids. There are two classes of PPR proteins. The P class contains tandem P-type motif sequences, and the PLS class contains alternating P, L and S type sequences. In this paper, we describe a novel tool that predicts PPR-RNA interaction; specifically, our method, which we call aPPRove, determines where and how a PLS-class PPR protein will bind to RNA when given a PPR and one or more RNA transcripts by using a combinatorial binding code for site specificity proposed by Barkan et al. Our results demonstrate that aPPRove successfully locates how and where a PPR protein belonging to the PLS class can bind to RNA. For each binding event it outputs the binding site, the amino-acid-nucleotide interaction, and its statistical significance. Furthermore, we show that our method can be used to predict binding events for PLS-class proteins using a known edit site and the statistical significance of aligning the PPR protein to that site. In particular, we use our method to make a conjecture regarding an interaction between CLB19 and the second intronic region of ycf3. The aPPRove web server can be found at www.cs.colostate.edu/~approve. PMID:27560805

  2. A 3D-CFD code for accurate prediction of fluid flows and fluid forces in seals

    NASA Technical Reports Server (NTRS)

    Athavale, M. M.; Przekwas, A. J.; Hendricks, R. C.

    1994-01-01

    Current and future turbomachinery requires advanced seal configurations to control leakage, inhibit mixing of incompatible fluids and to control the rotodynamic response. In recognition of a deficiency in the existing predictive methodology for seals, a seven-year effort was established in 1990 by NASA's Office of Aeronautics Exploration and Technology, under the Earth-to-Orbit Propulsion program, to develop validated Computational Fluid Dynamics (CFD) concepts, codes and analyses for seals. The effort will provide NASA and the U.S. Aerospace Industry with advanced CFD scientific codes and industrial codes for analyzing and designing turbomachinery seals. An advanced 3D CFD cylindrical seal code has been developed, incorporating state-of-the-art computational methodology for flow analysis in straight, tapered and stepped seals. Relevant computational features of the code include: stationary/rotating coordinates, cylindrical and general Body Fitted Coordinates (BFC) systems, high order differencing schemes, colocated variable arrangement, advanced turbulence models, incompressible/compressible flows, and moving grids. This paper presents the current status of code development, code demonstration for predicting rotordynamic coefficients, numerical parametric study of entrance loss coefficients for generic annular seals, and plans for code extensions to labyrinth, damping, and other seal configurations.

  4. IrisPlex: a sensitive DNA tool for accurate prediction of blue and brown eye colour in the absence of ancestry information.

    PubMed

    Walsh, Susan; Liu, Fan; Ballantyne, Kaye N; van Oven, Mannis; Lao, Oscar; Kayser, Manfred

    2011-06-01

    A new era of 'DNA intelligence' is arriving in forensic biology, due to the impending ability to predict externally visible characteristics (EVCs) from biological material such as that found at crime scenes. EVC prediction from forensic samples, or from body parts, is expected to help focus police investigations on finding unknown individuals at times when conventional DNA profiling fails to provide informative leads. Here we present a robust and sensitive tool, termed IrisPlex, for the accurate prediction of blue and brown eye colour from DNA in future forensic applications. We used the six currently most eye colour-informative single nucleotide polymorphisms (SNPs), which previously revealed prevalence-adjusted prediction accuracies of over 90% for blue and brown eye colour in 6168 Dutch Europeans. The single multiplex assay, based on SNaPshot chemistry and capillary electrophoresis, both widely used in forensic laboratories, displays high genotyping sensitivity, with complete profiles generated from as little as 31 pg of DNA, approximately six human diploid cell equivalents. We also present a prediction model to classify an individual's eye colour via probability estimation based solely on DNA data, and illustrate the accuracy of the developed prediction test on 40 individuals of various geographic origins. Moreover, we obtained insights into the worldwide allele distribution of these six SNPs using the HGDP-CEPH samples of 51 populations. Eye colour prediction analyses of the HGDP-CEPH samples provide evidence that the test and model presented here perform reliably without prior ancestry information, although future worldwide genotype and phenotype data shall confirm this notion. As our IrisPlex eye colour prediction test is capable of immediate implementation in forensic casework, it represents one of the first steps towards a fully individualised EVC prediction system for future use in forensic DNA intelligence.

  5. Accurate ab initio prediction of propagation rate coefficients in free-radical polymerization: Acrylonitrile and vinyl chloride

    NASA Astrophysics Data System (ADS)

    Izgorodina, Ekaterina I.; Coote, Michelle L.

    2006-05-01

    A systematic methodology for calculating accurate propagation rate coefficients in free-radical polymerization was designed and tested for vinyl chloride and acrylonitrile polymerization. For small to medium-sized polymer systems, theoretical reaction barriers are calculated using G3(MP2)-RAD. For larger systems, G3(MP2)-RAD barriers can be approximated (to within 1 kJ mol⁻¹) via an ONIOM-based approach in which the core is studied at G3(MP2)-RAD and the substituent effects are modeled with ROMP2/6-311+G(3df,2p). DFT methods (including BLYP, B3LYP, MPW1B95, BB1K and MPWB1K) failed to reproduce the correct trends in the reaction barriers and enthalpies with molecular size, though KMLYP showed some promise as a low-cost option for very large systems. Reaction rates are calculated via standard transition state theory in conjunction with the one-dimensional hindered rotor model. The harmonic oscillator approximation was shown to introduce an error of a factor of 2-3, making it suitable only for "order-of-magnitude" estimates. A systematic study of chain length effects indicated that rate coefficients had largely converged to their long-chain limit at the dimer radical stage, and that inclusion of the primary substituent of the penultimate unit was sufficient for practical purposes. Solvent effects, as calculated using the COSMO model, were found to be relatively minor. The overall methodology reproduced the available experimental data for both monomers within a factor of 2.
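    The "standard transition state theory" step mentioned above has a compact closed form. A minimal Eyring-equation sketch follows, without the one-dimensional hindered-rotor correction the paper applies; the 30 kJ/mol barrier is purely illustrative, not a value from the study.

    ```python
    # Minimal Eyring transition-state-theory rate sketch (no hindered-rotor
    # or tunnelling corrections; the barrier value is illustrative only).
    import math

    KB = 1.380649e-23   # Boltzmann constant, J/K
    H = 6.62607015e-34  # Planck constant, J s
    R = 8.314462618     # gas constant, J/(mol K)

    def tst_rate(delta_g_dagger_kj_mol, temp_k):
        """k = (kB*T/h) * exp(-DeltaG^ / (R*T)); returns a rate in s^-1."""
        return (KB * temp_k / H) * math.exp(-delta_g_dagger_kj_mol * 1e3 / (R * temp_k))

    # Illustrative 30 kJ/mol free-energy barrier at 298.15 K
    print(f"{tst_rate(30.0, 298.15):.3e}")
    ```

    The factor-of-2-3 harmonic-oscillator error quoted in the abstract enters through the free-energy barrier, i.e. through the partition functions folded into the exponent here.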

  6. Preoperative spirometry before abdominal operations. A critical appraisal of its predictive value.

    PubMed

    Lawrence, V A; Page, C P; Harris, G D

    1989-02-01

    Preoperative spirometry is commonly ordered before abdominal surgery, with the goal of predicting and preventing postoperative pulmonary complications. We assessed the evidence for this practice with a systematic literature search and critical appraisal of published studies. The search identified 135 clinical articles, of which 22 (16%) were actual investigations of the use and predictive value of preoperative spirometry. All 22 studies had important methodological flaws that preclude valid conclusions about the value of screening preoperative spirometry. The available evidence indicates that spirometry's predictive value is unproved. Unanswered questions involve (1) the yield of spirometry, in addition to history and physical examination, in patients with clinically apparent lung disease; (2) spirometry's yield in detecting surgically important occult disease; and (3) its utility, or beneficial effect on patient outcome. Spirometry's full potential for risk assessment in the individual patient has not yet been realized.

  7. Accurate prediction of secreted substrates and identification of a conserved putative secretion signal for type III secretion systems

    SciTech Connect

    Samudrala, Ram; Heffron, Fred; McDermott, Jason E.

    2009-04-24

    The type III secretion system is an essential component for virulence in many Gram-negative bacteria. Though components of the secretion system apparatus are conserved, its substrates, effector proteins, are not. We have used a machine learning approach to identify new secreted effectors. The method integrates evolutionary measures, such as the pattern of homologs in a range of other organisms, and sequence-based features, such as G+C content, amino acid composition and the N-terminal 30 residues of the protein sequence. The method was trained on known effectors from Salmonella typhimurium and validated on a corresponding set of effectors from Pseudomonas syringae, after eliminating effectors with detectable sequence similarity. The method was able to identify all of the known effectors in P. syringae with a specificity of 84% and sensitivity of 82%. The reciprocal validation, training on P. syringae and validating on S. typhimurium, gave similar results with a specificity of 86% when the sensitivity level was 87%. These results show that type III effectors in disparate organisms share common features. We found that maximal performance is attained by including an N-terminal sequence of only 30 residues, which agrees with previous studies indicating that this region contains the secretion signal. We then used the method to define the most important residues in this putative secretion signal. Finally, we present novel predictions of secreted effectors in S. typhimurium, some of which have been experimentally validated, and apply the method to predict secreted effectors in the genetically intractable human pathogen Chlamydia trachomatis. This approach is a novel and effective way to identify secreted effectors in a broad range of pathogenic bacteria for further experimental characterization and provides insight into the nature of the type III secretion signal.
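    Two of the sequence-based features described above (amino acid composition and the N-terminal 30 residues) are straightforward to compute. Below is a hedged sketch with an invented example sequence; the published method additionally uses evolutionary features (homolog patterns) and gene G+C content, which are omitted here.

    ```python
    # Sketch of sequence-based feature extraction for effector prediction:
    # overall amino-acid composition plus composition of the N-terminal
    # 30 residues (the putative secretion-signal region).
    from collections import Counter

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

    def aa_composition(seq):
        """Fractional amino-acid composition as a fixed-order 20-vector."""
        counts = Counter(seq)
        n = len(seq)
        return [counts.get(a, 0) / n for a in AMINO_ACIDS]

    def features(seq, n_term=30):
        # full-length composition + N-terminal-window composition
        return aa_composition(seq) + aa_composition(seq[:n_term])

    vec = features("MKLSNITPLQAR" * 10)  # invented protein sequence
    print(len(vec))  # 40
    ```

    Vectors like this one would be fed, together with the other feature groups, into the machine learning classifier trained on known effectors.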

  8. Circulating MicroRNA-150 Serum Levels Predict Survival in Patients with Critical Illness and Sepsis

    PubMed Central

    Vargas Cardenas, David; Vucur, Mihael; Scholten, David; Frey, Norbert; Koch, Alexander; Trautwein, Christian; Tacke, Frank; Luedde, Tom

    2013-01-01

    Background and Aims: Down-regulation of miR-150 was recently linked to inflammation and bacterial infection. Furthermore, reduced serum levels of miR-150 were reported from a small cohort of patients with sepsis. We thus aimed at evaluating the diagnostic and prognostic value of miR-150 serum levels in critically ill patients with and without sepsis. Methods: miR-150 serum levels were analyzed in a cohort of 223 critically ill patients, of which 138 fulfilled sepsis criteria, and compared to 76 healthy controls. Results were correlated with clinical data and extensive sets of routine and experimental biomarkers. Results: Measurements of miR-150 serum concentrations revealed only slightly reduced miR-150 serum levels in critically ill patients compared to healthy controls. Furthermore, miR-150 levels did not significantly differ between critically ill patients with or without sepsis, indicating that miR-150 serum levels are not suitable for the diagnostic establishment of sepsis. However, serum levels of miR-150 correlated with hepatic or renal dysfunction. Low miR-150 serum levels were associated with an unfavorable prognosis, since they predicted mortality with high diagnostic accuracy compared with established clinical scores and biomarkers. Conclusion: Reduced miR-150 serum concentrations are associated with an unfavorable outcome in patients with critical illness, independent of the presence of sepsis. Besides a possible pathogenic role of miR-150 in critical illness, our study indicates a potential use of circulating miRNAs as prognostic rather than diagnostic markers in critically ill patients. PMID:23372743

  9. Differential scanning calorimetry predicts the critical quality attributes of amorphous glibenclamide.

    PubMed

    Mah, Pei T; Laaksonen, Timo; Rades, Thomas; Peltonen, Leena; Strachan, Clare J

    2015-12-01

    Selection of a crystallinity detection tool that is able to predict the critical quality attributes of amorphous formulations is imperative for the development of process control strategies. The main aim of this study was to determine the crystallinity detection tool that best predicts the critical quality attributes (i.e. physical stability and dissolution behaviour) of amorphous material. Glibenclamide (model drug) was milled for various durations using a planetary mill and characterised using Raman spectroscopy and differential scanning calorimetry (DSC). Physical stability studies upon storage at 60°C/0% RH and dissolution studies (non-sink conditions) were performed on the milled glibenclamide samples. Different milling durations were needed to render glibenclamide fully amorphous according to Raman spectroscopy (60 min) and onset of crystallisation using DSC (150 min). This could be due to the superiority of DSC (onset of crystallisation) in detecting residual crystallinity in the samples milled for between 60 and 120 min, which was not detectable with Raman spectroscopy. The physical stability upon storage and dissolution behaviour of the milled samples improved with increased milling duration, and plateaus were reached after milling for certain periods of time (physical stability - 150 min; dissolution - 120 min). The residual crystallinity which was detectable with DSC (onset of crystallisation), but not with Raman spectroscopy, adversely affected the critical quality attributes of milled glibenclamide samples. In addition, mathematical simulations were performed on the dissolution data to determine the solubility advantages of the milled glibenclamide samples and to describe the crystallisation process that occurred during dissolution in pH 7.4 phosphate buffer. In conclusion, the onset of crystallisation obtained from DSC measurements best predicts the critical quality attributes of milled glibenclamide samples and mathematical simulations based on the

  11. Automatic Earthquake Shear Stress Measurement Method Developed for Accurate Time- Prediction Analysis of Forthcoming Major Earthquakes Along Shallow Active Faults

    NASA Astrophysics Data System (ADS)

    Serata, S.

    2006-12-01

    The Serata Stressmeter has been developed to measure and monitor earthquake shear stress build-up along shallow active faults. The development work done over the past 25 years has established the Stressmeter as an automatic stress measurement system for studying the timing of forthcoming major earthquakes, in support of current earthquake prediction studies based on statistical analysis of seismological observations. In early 1982, a series of major man-made earthquakes (magnitude 4.5-5.0) suddenly occurred in an area over a deep underground potash mine in Saskatchewan, Canada. By measuring the underground stress condition of the mine, the direct cause of the earthquakes was disclosed, and it was successfully eliminated by controlling the stress condition of the mine. The Japanese government was interested in this development, and the Stressmeter was introduced into the Japanese government research program for earthquake stress studies. In Japan the Stressmeter was first utilized for direct measurement of the intrinsic lateral tectonic stress gradient G. The measurement, conducted at the Mt. Fuji Underground Research Center of the Japanese government, disclosed constant natural gradients of the maximum and minimum lateral stresses, in excellent agreement with the theoretical value G = 0.25. The conventional methods of overcoring, hydrofracturing and deformation, which were introduced to compete with the Serata method, all failed, demonstrating their fundamental difficulties. The intrinsic lateral stress gradient determined by the Stressmeter for the Japanese government was found to be consistent with all other measurements made by the Stressmeter in Japan. The stress measurement results obtained in the major international stress measurement work of the Hot Dry Rock Projects conducted in the USA, England and Germany are in good agreement with the Stressmeter results obtained in Japan. Based on this broad agreement, a solid geomechanical

  12. Predicting College Students' First Year Success: Should Soft Skills Be Taken into Consideration to More Accurately Predict the Academic Achievement of College Freshmen?

    ERIC Educational Resources Information Center

    Powell, Erica Dion

    2013-01-01

    This study presents a survey developed to measure the skills of entering college freshmen in the areas of responsibility, motivation, study habits, literacy, and stress management, and explores the predictive power of this survey as a measure of academic performance during the first semester of college. The survey was completed by 334 incoming…

  13. Predicting Antimicrobial Resistance Prevalence and Incidence from Indicators of Antimicrobial Use: What Is the Most Accurate Indicator for Surveillance in Intensive Care Units?

    PubMed Central

    Fortin, Élise; Platt, Robert W.; Fontela, Patricia S.; Buckeridge, David L.; Quach, Caroline

    2015-01-01

    Objective: The optimal way to measure antimicrobial use in hospital populations, as a complement to surveillance of resistance, is still unclear. Using respiratory isolates and antimicrobial prescriptions of nine intensive care units (ICUs), this study aimed to identify the indicator of antimicrobial use that predicted prevalence and incidence rates of resistance with the best accuracy. Methods: Retrospective cohort study including all patients admitted to three neonatal (NICU), two pediatric (PICU) and four adult ICUs between April 2006 and March 2010. Ten different resistance/antimicrobial use combinations were studied. After adjustment for ICU type, indicators of antimicrobial use were successively tested in regression models to predict resistance prevalence and incidence rates, per 4-week time period, per ICU. Binomial regression and Poisson regression were used to model prevalence and incidence rates, respectively. Multiplicative and additive models were tested, both with no time lag and with a one-period (4-week) time lag. For each model, the mean absolute error (MAE) in prediction of resistance was computed. The most accurate indicator was compared to the other indicators using t-tests. Results: Results for all indicators were equivalent, except in 1 of the 20 scenarios studied. In this scenario, where prevalence of carbapenem-resistant Pseudomonas sp. was predicted with carbapenem use, recommended daily doses per 100 admissions were less accurate than courses per 100 patient-days (p = 0.0006). Conclusions: A single best indicator to predict antimicrobial resistance might not exist. Feasibility considerations such as ease of computation or potential external comparisons could be decisive in the choice of an indicator for surveillance of healthcare antimicrobial use. PMID:26710322

  14. Microdosing of a Carbon-14 Labeled Protein in Healthy Volunteers Accurately Predicts Its Pharmacokinetics at Therapeutic Dosages.

    PubMed

    Vlaming, M L H; van Duijn, E; Dillingh, M R; Brands, R; Windhorst, A D; Hendrikse, N H; Bosgra, S; Burggraaf, J; de Koning, M C; Fidder, A; Mocking, J A J; Sandman, H; de Ligt, R A F; Fabriek, B O; Pasman, W J; Seinen, W; Alves, T; Carrondo, M; Peixoto, C; Peeters, P A M; Vaes, W H J

    2015-08-01

    Preclinical development of new biological entities (NBEs), such as human protein therapeutics, requires considerable expenditure of time and costs. Poor prediction of pharmacokinetics in humans further reduces net efficiency. In this study, we show for the first time that pharmacokinetic data of NBEs in humans can be successfully obtained early in the drug development process by the use of microdosing in a small group of healthy subjects combined with ultrasensitive accelerator mass spectrometry (AMS). After only minimal preclinical testing, we performed a first-in-human phase 0/phase 1 trial with a human recombinant therapeutic protein (RESCuing Alkaline Phosphatase, human recombinant placental alkaline phosphatase [hRESCAP]) to assess its safety and kinetics. Pharmacokinetic analysis showed dose linearity from a microdose (53 μg) of [(14)C]-hRESCAP to therapeutic doses (up to 5.3 mg) of the protein in healthy volunteers. This study demonstrates the value of a microdosing approach in a very small cohort for accelerating the clinical development of NBEs. PMID:25869840
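    Dose linearity of the kind reported above is typically assessed by comparing dose-normalized exposure (AUC) across dose levels. A minimal sketch using the trapezoidal rule follows; the concentration-time values are invented for illustration and are not data from the hRESCAP trial.

    ```python
    # Sketch of a dose-linearity check: compare dose-normalized AUCs for a
    # microdose vs. a therapeutic dose. All concentration values are invented.
    def auc_trapezoid(times, concs):
        """Area under the concentration-time curve by the trapezoidal rule."""
        return sum((t1 - t0) * (c0 + c1) / 2.0
                   for t0, t1, c0, c1 in zip(times, times[1:], concs, concs[1:]))

    times = [0, 1, 2, 4, 8, 24]            # h
    micro = [0, 5, 4, 3, 2, 0.5]           # ng/mL after a 53 ug microdose
    thera = [0, 500, 400, 300, 200, 50]    # ng/mL after a 5.3 mg dose

    ratio = (auc_trapezoid(times, thera) / 5300.0) / (auc_trapezoid(times, micro) / 53.0)
    print(round(ratio, 2))  # 1.0 -> dose-proportional (linear) kinetics
    ```

    A ratio near 1 indicates that exposure scales proportionally with dose, which is the property that makes extrapolation from a microdose to therapeutic dosages defensible.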

  15. A new accurate ground-state potential energy surface of ethylene and predictions for rotational and vibrational energy levels

    NASA Astrophysics Data System (ADS)

    Delahaye, Thibault; Nikitin, Andrei; Rey, Michaël; Szalay, Péter G.; Tyuterev, Vladimir G.

    2014-09-01

    In this paper we report a new ground state potential energy surface for ethylene (ethene) C2H4 obtained from extended ab initio calculations. The coupled-cluster approach with the perturbative inclusion of the connected triple excitations CCSD(T) and correlation consistent polarized valence basis set cc-pVQZ was employed for computations of electronic ground state energies. The fit of the surface included 82 542 nuclear configurations using sixth order expansion in curvilinear symmetry-adapted coordinates involving 2236 parameters. A good convergence for variationally computed vibrational levels of the C2H4 molecule was obtained with a RMS(Obs.-Calc.) deviation of 2.7 cm-1 for fundamental bands centers and 5.9 cm-1 for vibrational bands up to 7800 cm-1. Large scale vibrational and rotational calculations for 12C2H4, 13C2H4, and 12C2D4 isotopologues were performed using this new surface. Energy levels for J = 20 up to 6000 cm-1 are in good agreement with observations. This represents a considerable improvement with respect to available global predictions of vibrational levels of 13C2H4 and 12C2D4 and rovibrational levels of 12C2H4.

  16. Accurate Predictions of Mean Geomagnetic Dipole Excursion and Reversal Frequencies, Mean Paleomagnetic Field Intensity, and the Radius of Earth's Core Using McLeod's Rule

    NASA Technical Reports Server (NTRS)

    Voorhies, Coerte V.; Conrad, Joy

    1996-01-01

    The geomagnetic spatial power spectrum R_n(r) is the mean square magnetic induction represented by degree n spherical harmonic coefficients of the internal scalar potential, averaged over the geocentric sphere of radius r. McLeod's Rule for the magnetic field generated by Earth's core geodynamo says that the expected core surface power spectrum R_nc(c) is inversely proportional to (2n + 1) for 1 < n ≤ N_E. McLeod's Rule is verified by locating Earth's core with main field models of Magsat data; the estimated core radius of 3485 km is close to the seismologic value for c of 3480 km. McLeod's Rule and similar forms are then calibrated with the model values of R_n for 3 ≤ n ≤ 12. Extrapolation to the degree-1 dipole predicts the expectation value of Earth's dipole moment to be about 5.89 × 10²² A m² rms (74.5% of the 1980 value) and the expected geomagnetic intensity to be about 35.6 μT rms at Earth's surface. Archeo- and paleomagnetic field intensity data show these and related predictions to be reasonably accurate. The probability distribution χ² with 2n + 1 degrees of freedom is assigned to (2n + 1)R_nc/⟨R_nc⟩. Extending this to the dipole implies that an exceptionally weak absolute dipole moment (≤ 20% of the 1980 value) will exist during 2.5% of geologic time. The mean duration of such major geomagnetic dipole power excursions, one quarter of which feature durable axial dipole reversal, is estimated from the modern dipole power time-scale and the statistical model of excursions. The resulting mean excursion duration of 2767 years forces us to predict an average of 9.04 excursions per million years, 2.26 axial dipole reversals per million years, and a mean reversal duration of 5533 years. Paleomagnetic data show these predictions to be quite accurate. McLeod's Rule led to accurate predictions of Earth's core radius, mean paleomagnetic field
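    The calibrate-then-extrapolate step of McLeod's Rule amounts to a one-parameter least-squares fit of R_n = A/(2n + 1) over degrees 3-12, followed by evaluation at n = 1. A minimal sketch follows; the spectrum values are synthetic, not Magsat model values.

    ```python
    # Sketch of calibrating McLeod's Rule, R_n = A / (2n + 1), on degrees
    # 3..12 and extrapolating to the degree-1 dipole power. The spectrum
    # below is synthetic (generated from the rule itself) for illustration.
    def fit_constant(spectrum):
        """Least-squares A for R_n = A/(2n+1), given a dict {n: R_n}."""
        num = sum(r / (2 * n + 1) for n, r in spectrum.items())
        den = sum(1.0 / (2 * n + 1) ** 2 for n in spectrum)
        return num / den

    A_true = 1.0e10  # arbitrary units
    spectrum = {n: A_true / (2 * n + 1) for n in range(3, 13)}

    A = fit_constant(spectrum)
    dipole_power = A / 3.0  # extrapolation of the rule to n = 1 (2n + 1 = 3)
    ```

    With real main-field model values the fit would not be exact, and the dipole extrapolation is what yields the expected dipole moment quoted in the abstract.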

  17. Integrating metabolic performance, thermal tolerance, and plasticity enables for more accurate predictions on species vulnerability to acute and chronic effects of global warming.

    PubMed

    Magozzi, Sarah; Calosi, Piero

    2015-01-01

    Predicting species vulnerability to global warming requires a comprehensive, mechanistic understanding of sublethal and lethal thermal tolerances. To date, however, most studies investigating species physiological responses to increasing temperature have focused on the underlying physiological traits of either acute or chronic tolerance in isolation. Here we propose an integrative, synthetic approach including the investigation of multiple physiological traits (metabolic performance and thermal tolerance), and their plasticity, to provide more accurate and balanced predictions on species and assemblage vulnerability to both acute and chronic effects of global warming. We applied this approach to more accurately elucidate relative species vulnerability to warming within an assemblage of six caridean prawns occurring in the same geographic, hence macroclimatic, region, but living in different thermal habitats. Prawns were exposed to four incubation temperatures (10, 15, 20 and 25 °C) for 7 days, their metabolic rates and upper thermal limits were measured, and plasticity was calculated according to the concept of Reaction Norms, as well as Q10 for metabolism. Compared to species occupying narrower/more stable thermal niches, species inhabiting broader/more variable thermal environments (including the invasive Palaemon macrodactylus) are likely to be less vulnerable to extreme acute thermal events as a result of their higher upper thermal limits. Nevertheless, they may be at greater risk from chronic exposure to warming due to the greater metabolic costs they incur. Indeed, a trade-off between acute and chronic tolerance was apparent in the assemblage investigated. However, the invasive species P. macrodactylus represents an exception to this pattern, showing elevated thermal limits and plasticity of these limits, as well as a high metabolic control. In general, integrating multiple proxies for species physiological acute and chronic responses to increasing
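    The Q10 metric used above has a simple closed form, Q10 = (R2/R1)^(10/(T2-T1)). A minimal sketch with illustrative rates:

    ```python
    # Q10 temperature coefficient of metabolic rate, as used in the abstract.
    # The rate and temperature values below are illustrative, not prawn data.
    def q10(r1, t1, r2, t2):
        """Q10 = (r2/r1) ** (10 / (t2 - t1)), temperatures in degC."""
        return (r2 / r1) ** (10.0 / (t2 - t1))

    # A metabolic rate that doubles between 15 and 25 degC gives Q10 = 2
    print(q10(1.0, 15.0, 2.0, 25.0))  # 2.0
    ```

    Higher Q10 values indicate stronger thermal sensitivity of metabolism, which is how the greater chronic metabolic costs discussed above would show up in practice.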

  18. Predicting critical temperatures of ionic and non-ionic fluids from thermophysical data obtained near the melting point

    NASA Astrophysics Data System (ADS)

    Weiss, Volker C.

    2015-10-01

    In the correlation and prediction of thermophysical data of fluids based on a corresponding-states approach, the critical temperature Tc plays a central role. For some fluids, in particular ionic ones, however, the critical region is difficult or even impossible to access experimentally. For molten salts, Tc is on the order of 3000 K, which makes accurate measurements a challenging task. Room temperature ionic liquids (RTILs) decompose thermally between 400 K and 600 K due to their organic constituents; this range of temperatures is hundreds of degrees below recent estimates of their Tc. In both cases, reliable methods to deduce Tc based on extrapolations of experimental data recorded at much lower temperatures near the triple or melting points are needed and useful because the critical point influences the fluid's behavior in the entire liquid region. Here, we propose to employ the scaling approach leading to universal fluid behavior [Román et al., J. Chem. Phys. 123, 124512 (2005)] to derive a very simple expression that allows one to estimate Tc from the density of the liquid, the surface tension, or the enthalpy of vaporization measured in a very narrow range of low temperatures. We demonstrate the validity of the approach for simple and polar neutral fluids, for which Tc is known, and then use the methodology to obtain estimates of Tc for ionic fluids. When comparing these estimates to those reported in the literature, good agreement is found for RTILs, whereas the ones for the molten salts NaCl and KCl are lower than previous estimates by 10%. The coexistence curve for ionic fluids is found to be more adequately described by an effective exponent of βeff = 0.5 than by βeff = 0.33.
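    The abstract does not reproduce the expression itself, but the general shape of such a low-temperature extrapolation can be illustrated with a Guggenheim-type corresponding-states law, σ(T) ∝ (1 − T/Tc)^(11/9), solved for Tc from two surface-tension points measured well below the critical region. This is a generic stand-in under that assumed exponent, not the specific expression derived in the paper.

    ```python
    # Illustrative Tc extrapolation from two low-temperature surface-tension
    # points, assuming a Guggenheim-type law sigma(T) ~ (1 - T/Tc)**(11/9).
    def tc_from_surface_tension(t1, s1, t2, s2, mu=11.0 / 9.0):
        """Solve (s1/s2)**(1/mu) = (Tc - t1)/(Tc - t2) for Tc by bisection.

        Assumes t1 < t2 (both well below Tc) and s1 > s2.
        """
        target = (s1 / s2) ** (1.0 / mu)
        lo, hi = t2 + 1e-6, 1.0e6  # (Tc-t1)/(Tc-t2) decreases from +inf to 1
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if (mid - t1) / (mid - t2) > target:
                lo = mid  # ratio too large -> true Tc is higher
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # Synthetic check: data generated with Tc = 3000 K is recovered.
    sigma = lambda T: (1.0 - T / 3000.0) ** (11.0 / 9.0)
    est = tc_from_surface_tension(1100.0, sigma(1100.0), 2100.0, sigma(2100.0))
    print(round(est))  # ~3000
    ```

    The same two-point structure applies whichever low-temperature property (density, surface tension, or enthalpy of vaporization) anchors the extrapolation.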

  19. Predicting critical temperatures of ionic and non-ionic fluids from thermophysical data obtained near the melting point.

    PubMed

    Weiss, Volker C

    2015-10-14

    In the correlation and prediction of thermophysical data of fluids based on a corresponding-states approach, the critical temperature Tc plays a central role. For some fluids, in particular ionic ones, however, the critical region is difficult or even impossible to access experimentally. For molten salts, Tc is on the order of 3000 K, which makes accurate measurements a challenging task. Room temperature ionic liquids (RTILs) decompose thermally between 400 K and 600 K due to their organic constituents; this range of temperatures is hundreds of degrees below recent estimates of their Tc. In both cases, reliable methods to deduce Tc based on extrapolations of experimental data recorded at much lower temperatures near the triple or melting points are needed and useful because the critical point influences the fluid's behavior in the entire liquid region. Here, we propose to employ the scaling approach leading to universal fluid behavior [Román et al., J. Chem. Phys. 123, 124512 (2005)] to derive a very simple expression that allows one to estimate Tc from the density of the liquid, the surface tension, or the enthalpy of vaporization measured in a very narrow range of low temperatures. We demonstrate the validity of the approach for simple and polar neutral fluids, for which Tc is known, and then use the methodology to obtain estimates of Tc for ionic fluids. When comparing these estimates to those reported in the literature, good agreement is found for RTILs, whereas the ones for the molten salts NaCl and KCl are lower than previous estimates by 10%. The coexistence curve for ionic fluids is found to be more adequately described by an effective exponent of βeff = 0.5 than by βeff = 0.33.

  20. Infectious titres of sheep scrapie and bovine spongiform encephalopathy agents cannot be accurately predicted from quantitative laboratory test results.

    PubMed

    González, Lorenzo; Thorne, Leigh; Jeffrey, Martin; Martin, Stuart; Spiropoulos, John; Beck, Katy E; Lockey, Richard W; Vickery, Christopher M; Holder, Thomas; Terry, Linda

    2012-11-01

    It is widely accepted that abnormal forms of the prion protein (PrP) are the best surrogate marker for the infectious agent of prion diseases and, in practice, the detection of such disease-associated (PrP(d)) and/or protease-resistant (PrP(res)) forms of PrP is the cornerstone of diagnosis and surveillance of the transmissible spongiform encephalopathies (TSEs). Nevertheless, some studies question the consistent association between infectivity and abnormal PrP detection. To address this discrepancy, 11 brain samples of sheep affected with natural scrapie or experimental bovine spongiform encephalopathy were selected on the basis of the magnitude and predominant types of PrP(d) accumulation, as shown by immunohistochemical (IHC) examination; contra-lateral hemi-brain samples were inoculated at three different dilutions into transgenic mice overexpressing ovine PrP and were also subjected to quantitative analysis by three biochemical tests (BCTs). Six samples gave 'low' infectious titres (10⁶·⁵ to 10⁶·⁷ LD₅₀ g⁻¹) and five gave 'high titres' (10⁸·¹ to ≥ 10⁸·⁷ LD₅₀ g⁻¹) and, with the exception of the Western blot analysis, those two groups tended to correspond with samples with lower PrP(d)/PrP(res) results by IHC/BCTs. However, no statistical association could be confirmed due to high individual sample variability. It is concluded that although detection of abnormal forms of PrP by laboratory methods remains useful to confirm TSE infection, infectivity titres cannot be predicted from quantitative test results, at least for the TSE sources and host PRNP genotypes used in this study. Furthermore, the near inverse correlation between infectious titres and Western blot results (high protease pre-treatment) argues for a dissociation between infectivity and PrP(res).

  1. A new accurate ground-state potential energy surface of ethylene and predictions for rotational and vibrational energy levels

    SciTech Connect

    Delahaye, Thibault Rey, Michaël Tyuterev, Vladimir G.; Nikitin, Andrei; Szalay, Péter G.

    2014-09-14

    In this paper we report a new ground state potential energy surface for ethylene (ethene) C₂H₄ obtained from extended ab initio calculations. The coupled-cluster approach with the perturbative inclusion of the connected triple excitations CCSD(T) and the correlation-consistent polarized valence basis set cc-pVQZ was employed for computations of electronic ground state energies. The fit of the surface included 82 542 nuclear configurations using a sixth-order expansion in curvilinear symmetry-adapted coordinates involving 2236 parameters. A good convergence for variationally computed vibrational levels of the C₂H₄ molecule was obtained, with an RMS(Obs.−Calc.) deviation of 2.7 cm⁻¹ for fundamental band centers and 5.9 cm⁻¹ for vibrational bands up to 7800 cm⁻¹. Large-scale vibrational and rotational calculations for the ¹²C₂H₄, ¹³C₂H₄, and ¹²C₂D₄ isotopologues were performed using this new surface. Energy levels for J = 20 up to 6000 cm⁻¹ are in good agreement with observations. This represents a considerable improvement with respect to available global predictions of vibrational levels of ¹³C₂H₄ and ¹²C₂D₄ and rovibrational levels of ¹²C₂H₄.

  2. Prediction of aqueous solubility, vapor pressure and critical micelle concentration for aquatic partitioning of perfluorinated chemicals.

    PubMed

    Bhhatarai, Barun; Gramatica, Paola

    2011-10-01

    The majority of perfluorinated chemicals (PFCs) pose an increasing risk to biota and the environment owing to their physicochemical stability, wide transport in the environment, and resistance to biodegradation. It is necessary to identify and prioritize these harmful PFCs and to characterize the physicochemical properties that govern the solubility, distribution and fate of these chemicals in an aquatic ecosystem. Therefore, available experimental data (10-35 compounds) for three important properties of per- and polyfluorinated compounds: aqueous solubility (AqS), vapor pressure (VP) and critical micelle concentration (CMC), were collected for quantitative structure-property relationship (QSPR) modeling. Simple and robust models based on theoretical molecular descriptors were developed and externally validated for predictivity. Model predictions for selected PFCs were compared with available experimental data and other published in silico predictions. The structural applicability domains (AD) of the models were verified on a bigger data set of 221 compounds. The predicted properties of the chemicals that are within the AD are reliable, and they help to reduce the wide data gap that exists. Moreover, the predictions of AqS, VP, and CMC for the most common PFCs were evaluated to understand aquatic partitioning and to derive a relation with the available experimental bioconcentration factor (BCF) data.

  3. Data-driven prediction of thresholded time series of rainfall and self-organized criticality models

    NASA Astrophysics Data System (ADS)

    Deluca, Anna; Moloney, Nicholas R.; Corral, Álvaro

    2015-05-01

    We study the occurrence of events, subject to threshold, in a representative self-organized criticality (SOC) sandpile model and in high-resolution rainfall data. The predictability in both systems is analyzed by means of a decision variable sensitive to event clustering, and the quality of the predictions is evaluated by the receiver operating characteristic (ROC) method. In the case of the SOC sandpile model, the scaling of quiet-time distributions with increasing threshold leads to increased predictability of extreme events. A scaling theory allows us to understand all the details of the prediction procedure and to extrapolate the shape of the ROC curves for the most extreme events. For rainfall data, the quiet-time distributions do not scale for high thresholds, which means that the corresponding ROC curves cannot be straightforwardly related to those for lower thresholds. In this way, ROC curves are useful for highlighting differences in predictability of extreme events between toy models and real-world phenomena.
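    The ROC method used above scores a binary alarm against event occurrence by sweeping a threshold over the decision variable. A minimal pure-Python sketch (the scores and labels below are invented, not the paper's data):

```python
# Minimal ROC sketch: sweep an alarm threshold over a decision variable
# and record (false-positive rate, true-positive rate) pairs.
def roc_points(scores, labels):
    """labels: 1 = extreme event occurred, 0 = quiet. Higher score = alarm."""
    thresholds = sorted(set(scores), reverse=True)
    points = [(0.0, 0.0)]
    pos = sum(labels)
    neg = len(labels) - pos
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / neg, tp / pos))
    return points

# Hypothetical scores: clustered events get higher decision-variable values.
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1,   1,   0,   1,   0,   0]
pts = roc_points(scores, labels)
```

    A curve hugging the upper-left corner (high TPR at low FPR) indicates good predictability of the thresholded events.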

  4. Noncontrast computed tomography can predict the outcome of shockwave lithotripsy via accurate stone measurement and abdominal fat distribution determination.

    PubMed

    Geng, Jiun-Hung; Tu, Hung-Pin; Shih, Paul Ming-Chen; Shen, Jung-Tsung; Jang, Mei-Yu; Wu, Wen-Jen; Li, Ching-Chia; Chou, Yii-Her; Juan, Yung-Shun

    2015-01-01

    Urolithiasis is a common disease of the urinary system. Extracorporeal shockwave lithotripsy (SWL) has become one of the standard treatments for renal and ureteral stones; however, success rates range widely, and failure of stone disintegration may cause additional expense, alternative procedures, and even complications. We used the data available from noncontrast abdominal computed tomography (NCCT) to evaluate the impact of stone parameters and abdominal fat distribution on calculus-free rates following SWL. We retrospectively reviewed 328 patients who had urinary stones and had undergone SWL from August 2012 to August 2013. All of them received pre-SWL NCCT; 1 month after SWL, radiography was arranged to evaluate the condition of the fragments. These patients were classified into a stone-free group and a residual stone group. Unenhanced computed tomography variables, including stone attenuation, abdominal fat area, and skin-to-stone distance (SSD), were analyzed. In all, 197 (60%) were classified as stone-free and 132 (40%) as having residual stones. The mean ages were 49.35 ± 13.22 years and 55.32 ± 13.52 years, respectively. On univariate analysis, age, stone size, stone surface area, stone attenuation, SSD, total fat area (TFA), abdominal circumference, serum creatinine, and the severity of hydronephrosis showed statistically significant differences between the two groups. On multivariate logistic regression analysis, the independent parameters affecting SWL outcomes were stone size, stone attenuation, TFA, and serum creatinine [adjusted odds ratios (95% confidence intervals): 9.49 (3.72-24.20), 2.25 (1.22-4.14), 2.20 (1.10-4.40), and 2.89 (1.35-6.21), respectively; all p < 0.05]. In the present study, stone size, stone attenuation, TFA and serum creatinine were four independent predictors of stone-free rates after SWL. These findings suggest that pretreatment NCCT may predict the outcomes after SWL. Consequently, we can use these predictors for selecting
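    The adjusted odds ratios quoted above come from exponentiating the fitted coefficients of a multivariate logistic model. A small sketch of that relationship; the coefficients are back-calculated from the published odds ratios purely for illustration, and the intercept and feature encodings are hypothetical (the study did not publish them):

```python
import math

# Adjusted odds ratios are the exponentials of logistic coefficients.
# Coefficients below are back-calculated from the reported ORs; they are
# NOT the study's fitted model.
coeffs = {
    "stone_size_large": math.log(9.49),        # adjusted OR 9.49
    "stone_attenuation_high": math.log(2.25),  # adjusted OR 2.25
    "tfa_high": math.log(2.20),                # adjusted OR 2.20
    "creatinine_high": math.log(2.89),         # adjusted OR 2.89
}

def odds_ratio(beta):
    return math.exp(beta)

def predicted_probability(intercept, betas, features):
    # Logistic model: p = 1 / (1 + exp(-(b0 + sum_i b_i * x_i))).
    z = intercept + sum(betas[k] * x for k, x in features.items())
    return 1.0 / (1.0 + math.exp(-z))
```

    With such a model, patient-specific stone-free probabilities could be estimated before treatment; here only the OR-to-coefficient mapping is demonstrated.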

  5. Scalable Predictive Analysis in Critically Ill Patients Using a Visual Open Data Analysis Platform.

    PubMed

    Van Poucke, Sven; Zhang, Zhongheng; Schmitz, Martin; Vukicevic, Milan; Laenen, Margot Vander; Celi, Leo Anthony; De Deyne, Cathy

    2016-01-01

    With the accumulation of large amounts of health-related data, predictive analytics could stimulate the transformation of reactive medicine towards Predictive, Preventive and Personalized Medicine (PPPM), ultimately affecting both cost and quality of care. However, the high dimensionality and complexity of the data involved prevent data-driven methods from easy translation into clinically relevant models. Additionally, the application of cutting-edge predictive methods and data manipulation requires substantial programming skills, limiting their direct exploitation by medical domain experts. This leaves a gap between potential and actual data usage. In this study, the authors address this problem by focusing on open, visual environments suitable for use by the medical community. Moreover, we review code-free applications of big data technologies. As a showcase, a framework was developed for the meaningful use of data from critical care patients by integrating the MIMIC-II database into a data mining environment (RapidMiner) supporting scalable predictive analytics using visual tools (RapidMiner's Radoop extension). Guided by the CRoss-Industry Standard Process for Data Mining (CRISP-DM), the ETL process (Extract, Transform, Load) was initiated by retrieving data from the MIMIC-II tables of interest. As a use case, the correlation of platelet count and ICU survival was quantitatively assessed. Using visual tools for ETL on Hadoop and predictive modeling in RapidMiner, we developed robust processes for the automatic building, parameter optimization and evaluation of various predictive models under different feature selection schemes. Because these processes can be easily adopted in other projects, this environment is attractive for scalable predictive analytics in health research. PMID:26731286

  6. Scalable Predictive Analysis in Critically Ill Patients Using a Visual Open Data Analysis Platform.

    PubMed

    Van Poucke, Sven; Zhang, Zhongheng; Schmitz, Martin; Vukicevic, Milan; Laenen, Margot Vander; Celi, Leo Anthony; De Deyne, Cathy

    2016-01-01

    With the accumulation of large amounts of health-related data, predictive analytics could stimulate the transformation of reactive medicine towards Predictive, Preventive and Personalized Medicine (PPPM), ultimately affecting both cost and quality of care. However, the high dimensionality and complexity of the data involved prevent data-driven methods from easy translation into clinically relevant models. Additionally, the application of cutting-edge predictive methods and data manipulation requires substantial programming skills, limiting their direct exploitation by medical domain experts. This leaves a gap between potential and actual data usage. In this study, the authors address this problem by focusing on open, visual environments suitable for use by the medical community. Moreover, we review code-free applications of big data technologies. As a showcase, a framework was developed for the meaningful use of data from critical care patients by integrating the MIMIC-II database into a data mining environment (RapidMiner) supporting scalable predictive analytics using visual tools (RapidMiner's Radoop extension). Guided by the CRoss-Industry Standard Process for Data Mining (CRISP-DM), the ETL process (Extract, Transform, Load) was initiated by retrieving data from the MIMIC-II tables of interest. As a use case, the correlation of platelet count and ICU survival was quantitatively assessed. Using visual tools for ETL on Hadoop and predictive modeling in RapidMiner, we developed robust processes for the automatic building, parameter optimization and evaluation of various predictive models under different feature selection schemes. Because these processes can be easily adopted in other projects, this environment is attractive for scalable predictive analytics in health research.

  7. Scalable Predictive Analysis in Critically Ill Patients Using a Visual Open Data Analysis Platform

    PubMed Central

    Poucke, Sven Van; Zhang, Zhongheng; Schmitz, Martin; Vukicevic, Milan; Laenen, Margot Vander; Celi, Leo Anthony; Deyne, Cathy De

    2016-01-01

    With the accumulation of large amounts of health-related data, predictive analytics could stimulate the transformation of reactive medicine towards Predictive, Preventive and Personalized Medicine (PPPM), ultimately affecting both cost and quality of care. However, the high dimensionality and complexity of the data involved prevent data-driven methods from easy translation into clinically relevant models. Additionally, the application of cutting-edge predictive methods and data manipulation requires substantial programming skills, limiting their direct exploitation by medical domain experts. This leaves a gap between potential and actual data usage. In this study, the authors address this problem by focusing on open, visual environments suitable for use by the medical community. Moreover, we review code-free applications of big data technologies. As a showcase, a framework was developed for the meaningful use of data from critical care patients by integrating the MIMIC-II database into a data mining environment (RapidMiner) supporting scalable predictive analytics using visual tools (RapidMiner’s Radoop extension). Guided by the CRoss-Industry Standard Process for Data Mining (CRISP-DM), the ETL process (Extract, Transform, Load) was initiated by retrieving data from the MIMIC-II tables of interest. As a use case, the correlation of platelet count and ICU survival was quantitatively assessed. Using visual tools for ETL on Hadoop and predictive modeling in RapidMiner, we developed robust processes for the automatic building, parameter optimization and evaluation of various predictive models under different feature selection schemes. Because these processes can be easily adopted in other projects, this environment is attractive for scalable predictive analytics in health research. PMID:26731286
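    The use case in the abstracts above, correlating platelet count with ICU survival, reduces to a point-biserial correlation (Pearson's r with a binary outcome). A pure-Python sketch on invented numbers, not MIMIC-II data:

```python
import math

def pearson_r(xs, ys):
    # Pearson correlation; with a binary y this is the point-biserial r.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical platelet counts (x 10^9/L) and survival flags (1 = survived).
platelets = [250, 180, 90, 310, 60, 220]
survived  = [1,   1,   0,  1,   0,  1]
r = pearson_r(platelets, survived)
```

    In the RapidMiner workflow described above this computation is done visually rather than in code; the sketch only shows the underlying statistic.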

  8. Comparison of different approaches to predict the spatial distribution of critical source areas for managing water quality on the catchment scale

    NASA Astrophysics Data System (ADS)

    Frey, M.; David, T.; Juwe, A.-L.; Reichert, P.; Stamm, C.

    2009-04-01

    Diffuse losses of agrochemicals from agricultural fields to surface water are generally limited to certain areas in a catchment prone to fast flow processes, also called critical source areas (CSA) or hydrologically sensitive areas (HSA). Effective mitigation strategies to reduce those losses rely on an accurate identification of those CSA/HSA. Different approaches to identify such areas are available. To compare them, we applied six approaches to the same small agricultural catchment in Switzerland, where spatial data on herbicide losses are available. The investigated approaches are a risk map integrated in the local soil map, an approach to delineate the Dominant Runoff Processes (DRP), an adaptation of the classification scheme of HOST (Hydrology Of Soil Types), a regression model to predict the spatial distribution of the Fast Flow Index (FFI), the topographic wetness index (l) and the continuous physically based water balance Soil Moisture Distribution and Routing model (SMDR). Despite their conceptual differences, the spatial agreement in the prediction of risk classes is surprisingly high, given that not all approaches use the same input data. The risk map, DRP, HOST and FFI approaches are all based on the local soil map. In contrast, the l and SMDR approaches are primarily based on the digital elevation model. This observation indicates that topography reflects important aspects of the soil distribution in this landscape. A comparison with the observed spatial variability of herbicide losses revealed that all approaches fail to accurately predict this variability if surface connectivity is not properly considered.

  9. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints.

    PubMed

    Wang, Shiyao; Deng, Zhidong; Yin, Gang

    2016-02-24

    A high-performance differential global positioning system (GPS) receiver with real-time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to the multipath effect but is also unable to perform precise error correction across a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive moving average (ARMA) equations with different structural parameters to build maximum likelihood models of the raw navigation data. Second, grid constraints and spatial consensus checks on all predictive results and current measurements are applied to remove outliers. Navigation data that satisfy a stationary stochastic process are then fused to achieve accurate localization results. Third, the standard deviation of the multimodal data fusion can be pre-specified by the grid size. Finally, we performed extensive field tests on a variety of real urban scenarios. The experimental results demonstrate that the method significantly smooths small jumps in bias and considerably reduces accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset, and the new data fusion method is applied in practice in our driverless car.

  10. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints

    PubMed Central

    Wang, Shiyao; Deng, Zhidong; Yin, Gang

    2016-01-01

    A high-performance differential global positioning system (GPS) receiver with real-time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to the multipath effect but is also unable to perform precise error correction across a wide range of driving areas. This paper proposes an accurate GPS–inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive moving average (ARMA) equations with different structural parameters to build maximum likelihood models of the raw navigation data. Second, grid constraints and spatial consensus checks on all predictive results and current measurements are applied to remove outliers. Navigation data that satisfy a stationary stochastic process are then fused to achieve accurate localization results. Third, the standard deviation of the multimodal data fusion can be pre-specified by the grid size. Finally, we performed extensive field tests on a variety of real urban scenarios. The experimental results demonstrate that the method significantly smooths small jumps in bias and considerably reduces accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset, and the new data fusion method is applied in practice in our driverless car. PMID:26927108

  11. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints.

    PubMed

    Wang, Shiyao; Deng, Zhidong; Yin, Gang

    2016-01-01

    A high-performance differential global positioning system (GPS) receiver with real-time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to the multipath effect but is also unable to perform precise error correction across a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive moving average (ARMA) equations with different structural parameters to build maximum likelihood models of the raw navigation data. Second, grid constraints and spatial consensus checks on all predictive results and current measurements are applied to remove outliers. Navigation data that satisfy a stationary stochastic process are then fused to achieve accurate localization results. Third, the standard deviation of the multimodal data fusion can be pre-specified by the grid size. Finally, we performed extensive field tests on a variety of real urban scenarios. The experimental results demonstrate that the method significantly smooths small jumps in bias and considerably reduces accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset, and the new data fusion method is applied in practice in our driverless car. PMID:26927108
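    The spatial consensus check described in the abstracts above can be illustrated with a much-simplified median-deviation filter. The real method additionally uses ARMA model predictions and occupancy-grid constraints; the function name, data, and threshold below are invented for the sketch:

```python
def consensus_filter(positions, max_dev=2.0):
    """Reject position fixes that deviate too far from the per-axis median.

    A simplified stand-in for a spatial consensus check: a real system
    would also test fixes against model predictions and grid constraints.
    """
    def median(vals):
        s = sorted(vals)
        n = len(s)
        return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2.0

    mx = median([p[0] for p in positions])
    my = median([p[1] for p in positions])
    return [p for p in positions
            if abs(p[0] - mx) <= max_dev and abs(p[1] - my) <= max_dev]

# One GPS fix suffering a multipath jump gets rejected.
fixes = [(10.0, 5.0), (10.1, 5.1), (10.2, 4.9), (18.0, 5.0)]
kept = consensus_filter(fixes)
```

    Surviving fixes would then be fused (e.g. variance-weighted averaging), which is where the grid-size-controlled standard deviation mentioned above comes in.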

  12. Prediction of heat transfer of nanofluid on critical heat flux based on fractal geometry

    NASA Astrophysics Data System (ADS)

    Xiao, Bo-Qi

    2013-01-01

    Analytical expressions for nucleate pool boiling heat transfer of nanofluid in the critical heat flux (CHF) region are derived taking into account the effect of nanoparticles moving in liquid based on the fractal geometry theory. The proposed fractal model for the CHF of nanofluid is explicitly related to the average diameter of the nanoparticles, the volumetric nanoparticle concentration, the thermal conductivity of nanoparticles, the fractal dimension of nanoparticles, the fractal dimension of active cavities on the heated surfaces, the temperature, and the properties of the fluid. It is found that the CHF of nanofluid decreases with the increase of the average diameter of nanoparticles. Each parameter of the proposed formulas on CHF has a clear physical meaning. The model predictions are compared with the existing experimental data, and a good agreement between the model predictions and experimental data is found. The validity of the present model is thus verified. The proposed fractal model can reveal the mechanism of heat transfer in nanofluid.

  13. Profile-QSAR: a novel meta-QSAR method that combines activities across the kinase family to accurately predict affinity, selectivity, and cellular activity.

    PubMed

    Martin, Eric; Mukherjee, Prasenjit; Sullivan, David; Jansen, Johanna

    2011-08-22

    Profile-QSAR is a novel 2D predictive model-building method for kinases. This "meta-QSAR" method models the activity of each compound against a new kinase target as a linear combination of its predicted activities against a large panel of 92 previously studied kinases, comprising 115 assays. Profile-QSAR starts with a sparse, incomplete kinase-by-compound (KxC) activity matrix, used to generate Bayesian QSAR models for the 92 "basis-set" kinases. These Bayesian QSARs generate a complete "synthetic" KxC activity matrix of predictions. These synthetic activities are used as "chemical descriptors" to train partial least squares (PLS) models, from modest amounts of medium-throughput screening data, for predicting activity against new kinases. The Profile-QSAR predictions for the 92 kinases (115 assays) gave a median external R²(ext) = 0.59 on 25% held-out test sets. The method has proven accurate enough to predict pairwise kinase selectivities with a median correlation of R²(ext) = 0.61 for 958 kinase pairs with at least 600 common compounds. It has been further expanded by adding a "C(k)XC" cellular activity matrix to the KxC matrix to predict cellular activity for 42 kinase-driven cellular assays, with median R²(ext) = 0.58 for 24 target modulation assays and R²(ext) = 0.41 for 18 cell proliferation assays. The 2D Profile-QSAR, along with the 3D Surrogate AutoShim, are the foundations of an internally developed iterative medium-throughput screening (IMTS) methodology for virtual screening (VS) of compound archives as an alternative to experimental high-throughput screening (HTS). The method has been applied to 20 actual prospective kinase projects. Biological results have so far been obtained in eight of them. Q² values ranged from 0.3 to 0.7. Hit rates at 10 μM for experimentally tested compounds varied from 25% to 80%, except in K5, which was a special case aimed specifically at finding "type II" binders, where none of the compounds were predicted to be
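    The core Profile-QSAR idea, reusing predicted activities against basis kinases as descriptors for a model of a new kinase, can be caricatured with a single basis kinase and ordinary least squares. The real method uses Bayesian QSAR models and PLS over 92 basis kinases; every number below is invented:

```python
# Toy sketch of the Profile-QSAR idea: predicted activities against a
# "basis" kinase serve as the descriptor for a linear model of a new
# kinase. Real Profile-QSAR fits PLS over 92 such descriptors.
def fit_ols_1d(x, y):
    # Ordinary least squares for one descriptor: y = slope * x + intercept.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

# Synthetic activities (pIC50-like) predicted for a basis kinase ...
basis_pred = [5.0, 6.0, 7.0, 8.0]
# ... and measured activities against the new target kinase.
new_target = [5.2, 6.1, 7.3, 8.0]

slope, intercept = fit_ols_1d(basis_pred, new_target)
predict = lambda x: slope * x + intercept
```

    The point of the meta-QSAR construction is that such a model can be trained from a modest screen against the new kinase, since the descriptors are cheap synthetic predictions rather than new measurements.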

  14. Development of uncertainty methodology for COBRA-TF void distribution and critical power predictions

    NASA Astrophysics Data System (ADS)

    Aydogan, Fatih

    Thermal hydraulic codes are commonly used tools in licensing processes for the evaluation of various thermal hydraulic scenarios. The uncertainty of a thermal hydraulic code prediction is quantified with uncertainty analyses. The objective of any uncertainty analysis is to determine how well a code predicts, with corresponding uncertainties. If a code has a large output uncertainty, it needs further development and/or model improvements; if it has a small uncertainty, it needs a maintenance program to preserve that small output uncertainty. Uncertainty analysis also indicates where more validation data are needed. Uncertainty analyses for BWR nominal steady-state and transient scenarios are necessary in order to develop and improve the two-phase flow models in thermal hydraulic codes. Because void distribution is the key factor in determining the flow regime and heat transfer regime, and critical power is an important factor for the safety margin, both steady-state void distribution and critical power predictions are important features of a code. An uncertainty analysis for these two phenomena provides valuable results, which can be used in the development of thermal hydraulic codes for designing a BWR bundle or for licensing procedures. This dissertation develops a particular uncertainty methodology for steady-state void distribution and critical power predictions. In this methodology, the PIRT element of CSAU was used to eliminate low-ranked uncertainty parameters, and the SPDF element of GRS was utilized to make the methodology flexible in the assignment of PDFs to the uncertainty parameters. 
The developed methodology includes uncertainty comparison methods to assess code precision with the sample-averaged bias, code spreading with the sample-averaged standard deviation, and code reliability with the proportion of
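    Two of the comparison statistics named above, the sample-averaged bias (code precision) and the sample-averaged standard deviation (code spreading), can be sketched directly; the void-fraction numbers below are made up:

```python
import math

def sample_averaged_bias(predicted, observed):
    # Code precision: mean of (prediction - measurement).
    return sum(p - o for p, o in zip(predicted, observed)) / len(predicted)

def sample_averaged_std(predicted, observed):
    # Code spreading: sample standard deviation of the prediction errors.
    errs = [p - o for p, o in zip(predicted, observed)]
    mean_err = sum(errs) / len(errs)
    return math.sqrt(sum((e - mean_err) ** 2 for e in errs) / (len(errs) - 1))

# Hypothetical void-fraction predictions vs. measurements.
pred = [0.42, 0.55, 0.61, 0.70]
obs  = [0.40, 0.53, 0.64, 0.69]
```

    A near-zero bias with a small standard deviation is what a well-validated code should show over the assessment database.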

  15. Hsp72 Is a Novel Biomarker to Predict Acute Kidney Injury in Critically Ill Patients

    PubMed Central

    Morales-Buenrostro, Luis E.; Salas-Nolasco, Omar I.; Barrera-Chimal, Jonatan; Casas-Aparicio, Gustavo; Irizar-Santana, Sergio; Pérez-Villalva, Rosalba; Bobadilla, Norma A.

    2014-01-01

    Background and Objectives Acute kidney injury (AKI) complicates the course of disease in critically ill patients. Efforts to change its clinical course have failed because of failure of early detection. This study was designed to assess whether heat shock protein 72 (Hsp72) is an early and sensitive biomarker of AKI compared with the kidney injury molecule-1 (Kim-1), neutrophil gelatinase-associated lipocalin (NGAL), and interleukin-18 (IL-18) biomarkers. Methods A total of 56 critically ill patients fulfilled the inclusion criteria. Of these patients, 17 developed AKI and 20 were selected as controls. In AKI patients, Kim-1, IL-18, NGAL, and Hsp72 were measured from 3 days before until 2 days after the AKI diagnosis, and in non-AKI patients at 1, 5 and 10 days after admission. Biomarker sensitivity and specificity were determined. To validate the results obtained with ROC curves for Hsp72, a new set of critically ill patients was included: 10 with AKI and 12 without AKI. Results Urinary Hsp72 levels rose from 3 days before the AKI diagnosis in critically ill patients; this early increase was not seen with any other tested biomarker. Kim-1, IL-18, NGAL, and Hsp72 significantly increased from 2 days before AKI and remained elevated at the AKI diagnosis. The best sensitivity/specificity was observed for Kim-1 and Hsp72: 83/95% and 100/90%, respectively, whereas 1 day before the AKI diagnosis the values were 100/100% and 100/90%, respectively. The sensitivity, specificity and accuracy in the validation test for Hsp72 were 100%, 83.3% and 90.9%, respectively. Conclusions The biomarker Hsp72 is sufficiently sensitive and specific to predict AKI in critically ill patients up to 3 days before the diagnosis. PMID:25313566
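    The validation figures quoted above (sensitivity 100%, specificity 83.3%, accuracy 90.9% on 10 AKI and 12 non-AKI patients) are consistent with a simple confusion-matrix back-calculation. The individual counts below are inferred for illustration, not reported by the study:

```python
def sensitivity_specificity_accuracy(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP);
    accuracy = (TP+TN)/total."""
    total = tp + fn + tn + fp
    return tp / (tp + fn), tn / (tn + fp), (tp + tn) / total

# Counts back-calculated from the reported validation cohort
# (10 AKI, 12 non-AKI): 0 false negatives, 2 false positives.
sens, spec, acc = sensitivity_specificity_accuracy(tp=10, fn=0, tn=10, fp=2)
# sens = 1.0 (100%), spec ~ 0.833 (83.3%), acc ~ 0.909 (90.9%)
```

    Sweeping the Hsp72 cut-off and recomputing these rates at each value is exactly how the ROC curves mentioned above are built.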

  16. Unprecedently Large-Scale Kinase Inhibitor Set Enabling the Accurate Prediction of Compound-Kinase Activities: A Way toward Selective Promiscuity by Design?

    PubMed

    Christmann-Franck, Serge; van Westen, Gerard J P; Papadatos, George; Beltran Escudie, Fanny; Roberts, Alexander; Overington, John P; Domine, Daniel

    2016-09-26

    Drug discovery programs frequently target members of the human kinome and try to identify small-molecule protein kinase inhibitors, primarily for cancer treatment, with additional indications increasingly investigated. One of the challenges is controlling an inhibitor's degree of selectivity, assessed by in vitro profiling against panels of protein kinases. We manually extracted, compiled, and standardized such profiles published in the literature: we collected 356 908 data points corresponding to 482 protein kinases, 2106 inhibitors, and 661 patents. We then analyzed this data set in terms of kinome coverage, reproducibility of results, popularity, and degree of selectivity of both kinases and inhibitors. We used the data set to create robust proteochemometric models capable of predicting kinase activity (the ligand-target space was modeled with an externally validated RMSE of 0.41 ± 0.02 log units and R₀² of 0.74 ± 0.03), in order to account for missing or unreliable measurements. The influence on prediction quality of parameters such as the number of measurements, Murcko scaffold frequency or inhibitor type was assessed. Interpretation of the models made it possible to highlight inhibitor and kinase properties correlated with higher affinities, and an analysis in the context of kinase crystal structures was performed. Overall, the models' quality allows the accurate prediction of kinase-inhibitor activities and their structural interpretation, thus paving the way for the rational design of compounds with a targeted selectivity profile.

  17. Unprecedently Large-Scale Kinase Inhibitor Set Enabling the Accurate Prediction of Compound–Kinase Activities: A Way toward Selective Promiscuity by Design?

    PubMed Central

    2016-01-01

    Drug discovery programs frequently target members of the human kinome and try to identify small-molecule protein kinase inhibitors, primarily for cancer treatment, with additional indications increasingly investigated. One of the challenges is controlling an inhibitor's degree of selectivity, assessed by in vitro profiling against panels of protein kinases. We manually extracted, compiled, and standardized such profiles published in the literature: we collected 356 908 data points corresponding to 482 protein kinases, 2106 inhibitors, and 661 patents. We then analyzed this data set in terms of kinome coverage, reproducibility of results, popularity, and degree of selectivity of both kinases and inhibitors. We used the data set to create robust proteochemometric models capable of predicting kinase activity (the ligand–target space was modeled with an externally validated RMSE of 0.41 ± 0.02 log units and R₀² of 0.74 ± 0.03), in order to account for missing or unreliable measurements. The influence on prediction quality of parameters such as the number of measurements, Murcko scaffold frequency or inhibitor type was assessed. Interpretation of the models made it possible to highlight inhibitor and kinase properties correlated with higher affinities, and an analysis in the context of kinase crystal structures was performed. Overall, the models' quality allows the accurate prediction of kinase-inhibitor activities and their structural interpretation, thus paving the way for the rational design of compounds with a targeted selectivity profile. PMID:27482722

  18. Unprecedently Large-Scale Kinase Inhibitor Set Enabling the Accurate Prediction of Compound-Kinase Activities: A Way toward Selective Promiscuity by Design?

    PubMed

    Christmann-Franck, Serge; van Westen, Gerard J P; Papadatos, George; Beltran Escudie, Fanny; Roberts, Alexander; Overington, John P; Domine, Daniel

    2016-09-26

    Drug discovery programs frequently target members of the human kinome and try to identify small-molecule protein kinase inhibitors, primarily for cancer treatment, with additional indications increasingly investigated. One of the challenges is controlling an inhibitor's degree of selectivity, assessed by in vitro profiling against panels of protein kinases. We manually extracted, compiled, and standardized such profiles published in the literature: we collected 356 908 data points corresponding to 482 protein kinases, 2106 inhibitors, and 661 patents. We then analyzed this data set in terms of kinome coverage, reproducibility of results, popularity, and degree of selectivity of both kinases and inhibitors. We used the data set to create robust proteochemometric models capable of predicting kinase activity (the ligand-target space was modeled with an externally validated RMSE of 0.41 ± 0.02 log units and R₀² of 0.74 ± 0.03), in order to account for missing or unreliable measurements. The influence on prediction quality of parameters such as the number of measurements, Murcko scaffold frequency or inhibitor type was assessed. Interpretation of the models made it possible to highlight inhibitor and kinase properties correlated with higher affinities, and an analysis in the context of kinase crystal structures was performed. Overall, the models' quality allows the accurate prediction of kinase-inhibitor activities and their structural interpretation, thus paving the way for the rational design of compounds with a targeted selectivity profile. PMID:27482722
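    The external validation metrics used in the abstracts above (RMSE in log units and a squared correlation) can be sketched as follows. Note the papers report an R₀² variant (regression through the origin); the sketch below shows RMSE plus the ordinary coefficient of determination, and all activity values are invented:

```python
import math

def rmse(pred, obs):
    # Root-mean-square error between predictions and observations.
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(pred))

def r_squared(pred, obs):
    # Ordinary coefficient of determination (not the through-origin R0^2).
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for p, o in zip(pred, obs))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

# Hypothetical pIC50 predictions vs. measured values (log units).
pred = [6.0, 7.1, 7.9, 9.2]
obs  = [6.2, 7.0, 8.1, 9.0]
```

    Computed on a held-out external set, as done above, these metrics estimate how the proteochemometric model will behave on unmeasured kinase-inhibitor pairs.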

  19. Impaired High-Density Lipoprotein Anti-Oxidant Function Predicts Poor Outcome in Critically Ill Patients

    PubMed Central

    Schrutka, Lore; Goliasch, Georg; Meyer, Brigitte; Wurm, Raphael; Koller, Lorenz; Kriechbaumer, Lukas; Heinz, Gottfried; Pacher, Richard; Lang, Irene M

    2016-01-01

    Introduction Oxidative stress affects clinical outcome in critically ill patients. Although high-density lipoprotein (HDL) particles generally possess anti-oxidant capacities, deleterious properties of HDL have been described in acutely ill patients. The impact of anti-oxidant HDL capacities on clinical outcome in critically ill patients is unknown. We therefore analyzed the predictive value of anti-oxidant HDL function on mortality in an unselected cohort of critically ill patients. Method We prospectively enrolled 270 consecutive patients admitted to a university-affiliated intensive care unit (ICU) and determined anti-oxidant HDL function using the HDL oxidant index (HOI). Based on their HOI, the study population was stratified into patients with impaired anti-oxidant HDL function and the residual study population. Results During a median follow-up time of 9.8 years (IQR: 9.2 to 10.0), 69% of patients died. Cox regression analysis revealed a significant and independent association between impaired anti-oxidant HDL function and short-term mortality with an adjusted HR of 1.65 (95% CI 1.22–2.24; p = 0.001) as well as 10-year mortality with an adj. HR of 1.19 (95% CI 1.02–1.40; p = 0.032) when compared to the residual study population. Anti-oxidant HDL function correlated with the amount of oxidative stress as determined by Cu/Zn superoxide dismutase (r = 0.38; p<0.001). Conclusion Impaired anti-oxidant HDL function represents a strong and independent predictor of 30-day mortality as well as long-term mortality in critically ill patients. PMID:26978526

  20. Risk prediction of Critical Infrastructures against extreme natural hazards: local and regional scale analysis

    NASA Astrophysics Data System (ADS)

    Rosato, Vittorio; Hounjet, Micheline; Burzel, Andreas; Di Pietro, Antonio; Tofani, Alberto; Pollino, Maurizio; Giovinazzi, Sonia

    2016-04-01

    Natural hazard events can induce severe impacts on the built environment; they can hit wide and densely populated areas containing large numbers of (inter)dependent technological systems, whose damage can cause the failure or malfunctioning of further services, spreading the impacts over wider geographical areas. The EU project CIPRNet (Critical Infrastructures Preparedness and Resilience Research Network) is realizing an unprecedented Decision Support System (DSS) that makes it possible to operationally perform risk prediction on Critical Infrastructures (CI) by predicting the occurrence of natural events (from long-term weather forecasts to short-term nowcasts), correlating intrinsic vulnerabilities of CI elements with the strengths of the different events' manifestations, and analysing the resulting Damage Scenario. The Damage Scenario is then transformed into an Impact Scenario, where punctual CI element damages are translated into micro-scale (local area) or meso-scale (regional) service outages. At the smaller scale, the DSS simulates detailed city models (where CI dependencies are explicitly accounted for), which are an important input for crisis-management organizations, whereas at the regional scale, using an approximate System-of-Systems model of systemic interactions, the focus is on raising awareness. The DSS has enabled the development of a novel simulation framework for predicting the shake maps originating from a given seismic event, considering shock-wave propagation in inhomogeneous media and the subsequently produced damage, by estimating building vulnerabilities on the basis of a phenomenological model [1, 2]. Moreover, in areas containing river basins, when abundant precipitation is expected, the DSS solves 1D/2D hydrodynamic models of the river basins to predict the flux runoff and the corresponding flood dynamics. This calculation allows the estimation of the Damage Scenario and triggers the evaluation of the Impact Scenario.

  1. Predicting Fatigue and Psychophysiological Test Performance from Speech for Safety-Critical Environments

    PubMed Central

    Baykaner, Khan Richard; Huckvale, Mark; Whiteley, Iya; Andreeva, Svetlana; Ryumin, Oleg

    2015-01-01

    Automatic systems for estimating operator fatigue have application in safety-critical environments. A system which could estimate level of fatigue from speech would have application in domains where operators engage in regular verbal communication as part of their duties. Previous studies on the prediction of fatigue from speech have been limited because of their reliance on subjective ratings and because they lack comparison to other methods for assessing fatigue. In this paper, we present an analysis of voice recordings and psychophysiological test scores collected from seven aerospace personnel during a training task in which they remained awake for 60 h. We show that voice features and test scores are affected by both the total time spent awake and the time position within each subject’s circadian cycle. However, we show that time spent awake and time-of-day information are poor predictors of the test results, while voice features can give good predictions of the psychophysiological test scores and sleep latency. Mean absolute errors of prediction are possible within about 17.5% for sleep latency and 5–12% for test scores. We discuss the implications for the use of voice as a means to monitor the effects of fatigue on cognitive performance in practical applications. PMID:26380259

  2. Can Student Nurse Critical Thinking Be Predicted from Perceptions of Structural Empowerment within the Undergraduate, Pre-Licensure Learning Environment?

    ERIC Educational Resources Information Center

    Caswell-Moore, Shelley P.

    2013-01-01

    The purpose of this study was to test a model using Rosabeth Kanter's theory (1977; 1993) of structural empowerment to determine if this model can predict student nurses' level of critical thinking. Major goals of nursing education are to cultivate graduates who can think critically with a keen sense of clinical judgment, and who can perform…

  3. PredictSNP2: A Unified Platform for Accurately Evaluating SNP Effects by Exploiting the Different Characteristics of Variants in Distinct Genomic Regions.

    PubMed

    Bendl, Jaroslav; Musil, Miloš; Štourač, Jan; Zendulka, Jaroslav; Damborský, Jiří; Brezovský, Jan

    2016-05-01

    An important message taken from human genome sequencing projects is that the human population exhibits approximately 99.9% genetic similarity. Variations in the remaining parts of the genome determine our identity, trace our history and reveal our heritage. The precise delineation of phenotypically causal variants plays a key role in providing accurate personalized diagnosis, prognosis, and treatment of inherited diseases. Several computational methods for achieving such delineation have been reported recently. However, their ability to pinpoint potentially deleterious variants is limited by the fact that their mechanisms of prediction do not account for the existence of different categories of variants. Consequently, their output is biased towards the variant categories that are most strongly represented in the variant databases. Moreover, most such methods provide numeric scores but not binary predictions of the deleteriousness of variants or confidence scores that would be more easily understood by users. We have constructed three datasets covering different types of disease-related variants, which were divided across five categories: (i) regulatory, (ii) splicing, (iii) missense, (iv) synonymous, and (v) nonsense variants. These datasets were used to develop category-optimal decision thresholds and to evaluate six tools for variant prioritization: CADD, DANN, FATHMM, FitCons, FunSeq2 and GWAVA. This evaluation revealed some important advantages of the category-based approach. The results obtained with the five best-performing tools were then combined into a consensus score. Additional comparative analyses showed that in the case of missense variations, protein-based predictors perform better than DNA sequence-based predictors. A user-friendly web interface was developed that provides easy access to the five tools' predictions, and their consensus scores, in a user-understandable format tailored to the specific features of different categories of variations. 

  4. PredictSNP2: A Unified Platform for Accurately Evaluating SNP Effects by Exploiting the Different Characteristics of Variants in Distinct Genomic Regions

    PubMed Central

    Brezovský, Jan

    2016-01-01

    An important message taken from human genome sequencing projects is that the human population exhibits approximately 99.9% genetic similarity. Variations in the remaining parts of the genome determine our identity, trace our history and reveal our heritage. The precise delineation of phenotypically causal variants plays a key role in providing accurate personalized diagnosis, prognosis, and treatment of inherited diseases. Several computational methods for achieving such delineation have been reported recently. However, their ability to pinpoint potentially deleterious variants is limited by the fact that their mechanisms of prediction do not account for the existence of different categories of variants. Consequently, their output is biased towards the variant categories that are most strongly represented in the variant databases. Moreover, most such methods provide numeric scores but not binary predictions of the deleteriousness of variants or confidence scores that would be more easily understood by users. We have constructed three datasets covering different types of disease-related variants, which were divided across five categories: (i) regulatory, (ii) splicing, (iii) missense, (iv) synonymous, and (v) nonsense variants. These datasets were used to develop category-optimal decision thresholds and to evaluate six tools for variant prioritization: CADD, DANN, FATHMM, FitCons, FunSeq2 and GWAVA. This evaluation revealed some important advantages of the category-based approach. The results obtained with the five best-performing tools were then combined into a consensus score. Additional comparative analyses showed that in the case of missense variations, protein-based predictors perform better than DNA sequence-based predictors. A user-friendly web interface was developed that provides easy access to the five tools’ predictions, and their consensus scores, in a user-understandable format tailored to the specific features of different categories of variations

  5. Absolute Measurements of Macrophage Migration Inhibitory Factor and Interleukin-1-β mRNA Levels Accurately Predict Treatment Response in Depressed Patients

    PubMed Central

    Ferrari, Clarissa; Uher, Rudolf; Bocchio-Chiavetto, Luisella; Riva, Marco Andrea; Pariante, Carmine M.

    2016-01-01

    Background: Increased levels of inflammation have been associated with a poorer response to antidepressants in several clinical samples, but these findings have been limited by low reproducibility of biomarker assays across laboratories, difficulty in predicting response probability on an individual basis, and unclear molecular mechanisms. Methods: Here we measured absolute mRNA values (a reliable quantitation of number of molecules) of Macrophage Migration Inhibitory Factor and interleukin-1β in a previously published sample from a randomized controlled trial comparing escitalopram vs nortriptyline (GENDEP) as well as in an independent, naturalistic replication sample. We then used linear discriminant analysis to calculate mRNA values cutoffs that best discriminated between responders and nonresponders after 12 weeks of antidepressants. As Macrophage Migration Inhibitory Factor and interleukin-1β might be involved in different pathways, we constructed a protein-protein interaction network by the Search Tool for the Retrieval of Interacting Genes/Proteins. Results: We identified cutoff values for the absolute mRNA measures that accurately predicted response probability on an individual basis, with positive predictive values and specificity for nonresponders of 100% in both samples (negative predictive value=82% to 85%, sensitivity=52% to 61%). Using network analysis, we identified different clusters of targets for these 2 cytokines, with Macrophage Migration Inhibitory Factor interacting predominantly with pathways involved in neurogenesis, neuroplasticity, and cell proliferation, and interleukin-1β interacting predominantly with pathways involved in the inflammasome complex, oxidative stress, and neurodegeneration. Conclusion: We believe that these data provide a clinically suitable approach to the personalization of antidepressant therapy: patients who have absolute mRNA values above the suggested cutoffs could be directed toward earlier access to more

  6. Dose Addition Models Based on Biologically Relevant Reductions in Fetal Testosterone Accurately Predict Postnatal Reproductive Tract Alterations by a Phthalate Mixture in Rats.

    PubMed

    Howdeshell, Kembra L; Rider, Cynthia V; Wilson, Vickie S; Furr, Johnathan R; Lambright, Christy R; Gray, L Earl

    2015-12-01

    Challenges in cumulative risk assessment of anti-androgenic phthalate mixtures include a lack of data on all the individual phthalates and difficulty determining the biological relevance of reduction in fetal testosterone (T) on postnatal development. The objectives of the current study were 2-fold: (1) to test whether a mixture model of dose addition based on the fetal T production data of individual phthalates would predict the effects of a 5-phthalate mixture on androgen-sensitive postnatal male reproductive tract development, and (2) to determine the biological relevance of the reductions in fetal T to induce abnormal postnatal reproductive tract development using data from the mixture study. We administered a dose range of the mixture (60, 40, 20, 10, and 5% of the top dose used in the previous fetal T production study, consisting of 300 mg/kg per chemical of benzyl butyl (BBP), di(n)butyl (DBP), diethyl hexyl phthalate (DEHP), di-isobutyl phthalate (DiBP), and 100 mg dipentyl (DPP) phthalate/kg; the individual phthalates were present in equipotent doses based on their ability to reduce fetal T production) via gavage to Sprague Dawley rat dams from gestational day 8 to postnatal day 3. We compared observed mixture responses to predictions of dose addition based on the previously published potencies of the individual phthalates to reduce fetal T production relative to a reference chemical and published postnatal data for the reference chemical (called DAref). In addition, we predicted DA (called DAall) and response addition (RA) based on logistic regression analysis of all 5 individual phthalates when complete data were available. DAref and DAall accurately predicted the observed mixture effect for 11 of 14 endpoints. Furthermore, reproductive tract malformations were seen in 17-100% of F1 males when fetal T production was reduced by about 25-72%, respectively. PMID:26350170

  7. Review article: shock index for prediction of critical bleeding post-trauma: a systematic review.

    PubMed

    Olaussen, Alexander; Blackburn, Todd; Mitra, Biswadev; Fitzgerald, Mark

    2014-06-01

    Early diagnosis of haemorrhagic shock (HS) might be difficult because of compensatory mechanisms. Clinical scoring systems aimed at predicting transfusion needs might assist in early identification of patients with HS. The Shock Index (SI) - defined as heart rate divided by systolic BP - has been proposed as a simple tool to identify patients with HS. This systematic review discusses the SI's utility post-trauma in predicting critical bleeding (CB). We searched the databases MEDLINE, Embase, CINAHL, Cochrane Library, Scopus and PubMed from their commencement to 1 September 2013. Studies that described an association with SI and CB, defined as at least 4 units of packed red blood cells (pRBC) or whole blood within 24 h, were included. Of the 351 located articles identified by the initial search strategy, five met inclusion criteria. One study pertained to the pre-hospital setting, one to the military, two to the in-hospital setting, and one included analysis of both pre-hospital and in-hospital values. The majority of papers assessed predictive properties of the SI in ≥10 units pRBC in the first 24 h. The most frequently suggested optimal SI cut-off was ≥0.9. An association between higher SI and bleeding was demonstrated in all studies. The SI is a readily available tool and may be useful in predicting CB on arrival to hospital. The evaluation of improved utility of the SI by performing and recording at earlier time-points, including the pre-hospital phase, is indicated.
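
The index described in this review reduces to a single ratio; a minimal sketch, where the ≥0.9 threshold is the review's most frequently suggested cut-off and the function names are illustrative:

```python
def shock_index(heart_rate_bpm, systolic_bp_mmhg):
    """Shock Index (SI) = heart rate / systolic blood pressure."""
    if systolic_bp_mmhg <= 0:
        raise ValueError("systolic BP must be positive")
    return heart_rate_bpm / systolic_bp_mmhg

# Most frequently suggested optimal cut-off for critical bleeding in the review.
CRITICAL_BLEEDING_CUTOFF = 0.9

def above_cutoff(heart_rate_bpm, systolic_bp_mmhg, cutoff=CRITICAL_BLEEDING_CUTOFF):
    """True when the SI meets or exceeds the suggested cut-off."""
    return shock_index(heart_rate_bpm, systolic_bp_mmhg) >= cutoff
```

For example, a heart rate of 110 bpm with a systolic BP of 100 mmHg gives SI = 1.1, above the suggested cut-off, whereas 70 bpm over 120 mmHg gives SI ≈ 0.58.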

  8. Critical Flicker Fusion Predicts Executive Function in Younger and Older Adults.

    PubMed

    Mewborn, Catherine; Renzi, Lisa M; Hammond, Billy R; Miller, L Stephen

    2015-11-01

    Critical flicker fusion (CFF), a measure of visual processing speed, has often been regarded as a basic metric underlying a number of higher cognitive functions. To test this, we measured CFF, global cognition, and several cognitive subdomains. Because age is a strong covariate for most of these variables, both younger (n = 72) and older (n = 57) subjects were measured. Consistent with expectations, age was inversely related to CFF and performance on all of the cognitive measures except for visual memory. In contrast, age-adjusted CFF thresholds were only positively related to executive function. Results showed that CFF predicted executive function across both age groups and accounted for unique variance in performance above and beyond age and global cognitive status. The current findings suggest that CFF may be a unique predictor of executive dysfunction. PMID:26370250

  9. Discovery of a general method of solving the Schrödinger and dirac equations that opens a way to accurately predictive quantum chemistry.

    PubMed

    Nakatsuji, Hiroshi

    2012-09-18

    Just as Newtonian law governs classical physics, the Schrödinger equation (SE) and the relativistic Dirac equation (DE) rule the world of chemistry. So, if we can solve these equations accurately, we can use computation to predict chemistry precisely. However, for approximately 80 years after the discovery of these equations, chemists believed that they could not solve SE and DE for atoms and molecules that included many electrons. This Account reviews ideas developed over the past decade to further the goal of predictive quantum chemistry. Between 2000 and 2005, I discovered a general method of solving the SE and DE accurately. As a first inspiration, I formulated the structure of the exact wave function of the SE in a compact mathematical form. The explicit inclusion of the exact wave function's structure within the variational space allows for the calculation of the exact wave function as a solution of the variational method. Although this process sounds almost impossible, it is indeed possible, and I have published several formulations and applied them to solve the full configuration interaction (CI) with a very small number of variables. However, when I examined analytical solutions for atoms and molecules, the Hamiltonian integrals in their secular equations diverged. This singularity problem occurred in all atoms and molecules because it originates from the singularity of the Coulomb potential in their Hamiltonians. To overcome this problem, I first introduced the inverse SE and then the scaled SE. The latter simpler idea led to immediate and surprisingly accurate solutions for the SEs of the hydrogen atom, helium atom, and hydrogen molecule. The free complement (FC) method, also called the free iterative CI (free ICI) method, was efficient for solving the SEs. In the FC method, the basis functions that span the exact wave function are produced by the Hamiltonian of the system and the zeroth-order wave function. These basis functions are called complement functions.

  10. Comparison of the RIFLE, AKIN and KDIGO criteria to predict mortality in critically ill patients

    PubMed Central

    Levi, Talita Machado; de Souza, Sérgio Pinto; de Magalhães, Janine Garcia; de Carvalho, Márcia Sampaio; Cunha, André Luiz Barreto; Dantas, João Gabriel Athayde de Oliveira; Cruz, Marília Galvão; Guimarães, Yasmin Laryssa Moura; Cruz, Constança Margarida Sampaio

    2013-01-01

    Objective Acute kidney injury is a common complication in critically ill patients, and the RIFLE, AKIN and KDIGO criteria are used to classify these patients. The present study's aim was to compare these criteria as predictors of mortality in critically ill patients. Methods Prospective cohort study using medical records as the source of data. All patients admitted to the intensive care unit were included. The exclusion criteria were hospitalization for less than 24 hours and death. Patients were followed until discharge or death. Student's t test, chi-squared analysis, a multivariate logistic regression and ROC curves were used for the data analysis. Results The mean patient age was 64 years old, and the majority of patients were women of African descent. According to RIFLE, the mortality rates were 17.74%, 22.58%, 24.19% and 35.48% for patients without acute kidney injury (AKI) and at the Risk, Injury and Failure stages, respectively. For AKIN, the mortality rates were 17.74%, 29.03%, 12.90% and 40.32% for patients without AKI and at stage I, stage II and stage III, respectively. For KDIGO 2012, the mortality rates were 17.74%, 29.03%, 11.29% and 41.94% for patients without AKI and at stage I, stage II and stage III, respectively. All three classification systems showed similar ROC curves for mortality. Conclusion The RIFLE, AKIN and KDIGO criteria were good tools for predicting mortality in critically ill patients with no significant difference between them. PMID:24553510
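
Since all three classification systems were compared via ROC curves, a minimal rank-based AUC sketch may be useful; this is the Mann-Whitney U formulation, and the names are illustrative:

```python
def roc_auc(labels, scores):
    """AUC as the probability that a randomly chosen positive case (e.g. death)
    receives a higher score (e.g. AKI stage) than a randomly chosen negative
    case; tied scores count as half a win (Mann-Whitney U formulation)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```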

  11. Tuning of Strouhal number for high propulsive efficiency accurately predicts how wingbeat frequency and stroke amplitude relate and scale with size and flight speed in birds.

    PubMed Central

    Nudds, Robert L.; Taylor, Graham K.; Thomas, Adrian L. R.

    2004-01-01

    The wing kinematics of birds vary systematically with body size, but we still, after several decades of research, lack a clear mechanistic understanding of the aerodynamic selection pressures that shape them. Swimming and flying animals have recently been shown to cruise at Strouhal numbers (St) corresponding to a regime of vortex growth and shedding in which the propulsive efficiency of flapping foils peaks (St ≈ fA/U, where f is wingbeat frequency, U is cruising speed and A ≈ b sin(θ/2) is stroke amplitude, in which b is wingspan and θ is stroke angle). We show that St is a simple and accurate predictor of wingbeat frequency in birds. The Strouhal numbers of cruising birds have converged on the lower end of the range 0.2 < St < 0.4 associated with high propulsive efficiency. Stroke angle scales as θ ≈ 67b^(-0.24), so wingbeat frequency can be predicted as f ≈ St·U/(b sin(33.5b^(-0.24))), with St ≈ 0.21 and St ≈ 0.25 for direct and intermittent fliers, respectively. This simple aerodynamic model predicts wingbeat frequency better than any other relationship proposed to date, explaining 90% of the observed variance in a sample of 60 bird species. Avian wing kinematics therefore appear to have been tuned by natural selection for high aerodynamic efficiency: physical and physiological constraints upon wing kinematics must be reconsidered in this light. PMID:15451698
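
The scaling relationship above can be turned into a small calculator. A sketch assuming the stroke angle θ ≈ 67b^(-0.24) is expressed in degrees:

```python
import math

def predicted_wingbeat_frequency(U, b, St=0.21):
    """Predicted wingbeat frequency f (Hz) from cruising speed U (m/s) and
    wingspan b (m): f = St * U / (b * sin(theta/2)), with stroke angle
    theta ~ 67 * b**-0.24 taken to be in degrees. St = 0.21 for direct
    fliers, 0.25 for intermittent fliers."""
    half_stroke_deg = 33.5 * b ** -0.24          # theta / 2, in degrees
    return St * U / (b * math.sin(math.radians(half_stroke_deg)))
```

For a small direct flier (b = 0.2 m) cruising at 10 m/s this predicts roughly 14 Hz, while a 1 m wingspan at the same speed gives roughly 4 Hz, reproducing the expected decrease of wingbeat frequency with size.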

  12. Genome-Scale Metabolic Model for the Green Alga Chlorella vulgaris UTEX 395 Accurately Predicts Phenotypes under Autotrophic, Heterotrophic, and Mixotrophic Growth Conditions.

    PubMed

    Zuñiga, Cristal; Li, Chien-Ting; Huelsman, Tyler; Levering, Jennifer; Zielinski, Daniel C; McConnell, Brian O; Long, Christopher P; Knoshaug, Eric P; Guarnieri, Michael T; Antoniewicz, Maciek R; Betenbaugh, Michael J; Zengler, Karsten

    2016-09-01

    The green microalga Chlorella vulgaris has been widely recognized as a promising candidate for biofuel production due to its ability to store high lipid content and its natural metabolic versatility. Compartmentalized genome-scale metabolic models constructed from genome sequences enable quantitative insight into the transport and metabolism of compounds within a target organism. These metabolic models have long been utilized to generate optimized design strategies for an improved production process. Here, we describe the reconstruction, validation, and application of a genome-scale metabolic model for C. vulgaris UTEX 395, iCZ843. The reconstruction represents the most comprehensive model for any eukaryotic photosynthetic organism to date, based on the genome size and number of genes in the reconstruction. The highly curated model accurately predicts phenotypes under photoautotrophic, heterotrophic, and mixotrophic conditions. The model was validated against experimental data and lays the foundation for model-driven strain design and medium alteration to improve yield. Calculated flux distributions under different trophic conditions show that a number of key pathways are affected by nitrogen starvation conditions, including central carbon metabolism and amino acid, nucleotide, and pigment biosynthetic pathways. Furthermore, model prediction of growth rates under various medium compositions and subsequent experimental validation showed an increased growth rate with the addition of tryptophan and methionine. PMID:27372244

  13. The Model for End-stage Liver Disease accurately predicts 90-day liver transplant wait-list mortality in Atlantic Canada

    PubMed Central

    Renfrew, Paul Douglas; Quan, Hude; Doig, Christopher James; Dixon, Elijah; Molinari, Michele

    2011-01-01

    OBJECTIVE: To determine the generalizability of the predictions for 90-day mortality generated by Model for End-stage Liver Disease (MELD) and the serum sodium augmented MELD (MELDNa) to Atlantic Canadian adults with end-stage liver disease awaiting liver transplantation (LT). METHODS: The predictive accuracy of the MELD and the MELDNa was evaluated by measurement of the discrimination and calibration of the respective models’ estimates for the occurrence of 90-day mortality in a consecutive cohort of LT candidates accrued over a five-year period. Accuracy of discrimination was measured by the area under the ROC curves. Calibration accuracy was evaluated by comparing the observed and model-estimated incidences of 90-day wait-list failure for the total cohort and within quantiles of risk. RESULTS: The area under the ROC curve for the MELD was 0.887 (95% CI 0.705 to 0.978) – consistent with very good accuracy of discrimination. The area under the ROC curve for the MELDNa was 0.848 (95% CI 0.681 to 0.965). The observed incidence of 90-day wait-list mortality in the validation cohort was 7.9%, which was not significantly different from the MELD estimate of 6.6% (95% CI 4.9% to 8.4%; P=0.177) or the MELDNa estimate of 5.8% (95% CI 3.5% to 8.0%; P=0.065). Global goodness-of-fit testing found no evidence of significant lack of fit for either model (Hosmer-Lemeshow χ2 [df=3] for MELD 2.941, P=0.401; for MELDNa 2.895, P=0.414). CONCLUSION: Both the MELD and the MELDNa accurately predicted the occurrence of 90-day wait-list mortality in the study cohort and, therefore, are generalizable to Atlantic Canadians with end-stage liver disease awaiting LT. PMID:21876856
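
The MELD and MELDNa scores evaluated in this study have standard published forms; a sketch using the commonly cited UNOS-style formulas (these are not taken from the abstract itself, so coefficients and clamping rules should be checked against current allocation policy):

```python
import math

def meld(creatinine_mg_dl, bilirubin_mg_dl, inr):
    """Commonly cited MELD formula: lab values are floored at 1.0 and
    creatinine is capped at 4.0 mg/dL before taking natural logs."""
    cr = min(max(creatinine_mg_dl, 1.0), 4.0)
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    return 9.57 * math.log(cr) + 3.78 * math.log(bili) + 11.2 * math.log(inr) + 6.43

def meld_na(meld_score, sodium_mmol_l):
    """Serum-sodium augmented MELD; sodium is clamped to 125-137 mmol/L,
    so a normal sodium leaves the MELD score unchanged."""
    na = min(max(sodium_mmol_l, 125.0), 137.0)
    return meld_score + 1.32 * (137.0 - na) - 0.033 * meld_score * (137.0 - na)
```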

  14. The VACS Index Accurately Predicts Mortality and Treatment Response among Multi-Drug Resistant HIV Infected Patients Participating in the Options in Management with Antiretrovirals (OPTIMA) Study

    PubMed Central

    Brown, Sheldon T.; Tate, Janet P.; Kyriakides, Tassos C.; Kirkwood, Katherine A.; Holodniy, Mark; Goulet, Joseph L.; Angus, Brian J.; Cameron, D. William; Justice, Amy C.

    2014-01-01

    Objectives The VACS Index is highly predictive of all-cause mortality among HIV infected individuals within the first few years of combination antiretroviral therapy (cART). However, its accuracy among highly treatment experienced individuals and its responsiveness to treatment interventions have yet to be evaluated. We compared the accuracy and responsiveness of the VACS Index with a Restricted Index of age and traditional HIV biomarkers among patients enrolled in the OPTIMA study. Methods Using data from 324/339 (96%) patients in OPTIMA, we evaluated associations between indices and mortality using Kaplan-Meier estimates, proportional hazards models, Harrell's C-statistic and net reclassification improvement (NRI). We also determined the association between study interventions and risk scores over time, and change in score and mortality. Results Both the Restricted Index (c = 0.70) and VACS Index (c = 0.74) predicted mortality from baseline, but discrimination was improved with the VACS Index (NRI = 23%). Change in score from baseline to 48 weeks was more strongly associated with survival for the VACS Index than the Restricted Index, with respective hazard ratios of 0.26 (95% CI 0.14–0.49) and 0.39 (95% CI 0.22–0.70) among the 25% most improved scores, and 2.08 (95% CI 1.27–3.38) and 1.51 (95% CI 0.90–2.53) for the 25% least improved scores. Conclusions The VACS Index predicts all-cause mortality more accurately among multi-drug resistant, treatment experienced individuals and is more responsive to changes in risk associated with treatment intervention than an index restricted to age and HIV biomarkers. The VACS Index holds promise as an intermediate outcome for intervention research. PMID:24667813

  15. Toward Relatively General and Accurate Quantum Chemical Predictions of Solid-State 17O NMR Chemical Shifts in Various Biologically Relevant Oxygen-containing Compounds

    PubMed Central

    Rorick, Amber; Michael, Matthew A.; Yang, Liu; Zhang, Yong

    2015-01-01

    Oxygen is an important element in most biologically significant molecules, and experimental solid-state 17O NMR studies have provided numerous useful structural probes to study these systems. However, computational predictions of solid-state 17O NMR chemical shift tensor properties are still challenging in many cases, and in particular, each of the prior computational works is basically limited to one type of oxygen-containing system. This work provides the first systematic study of the effects of geometry refinement, method, and basis sets for metal and non-metal elements in both geometry optimization and NMR property calculations of some biologically relevant oxygen-containing compounds with a good variety of X–O bonding groups (X = H, C, N, P, and metal). The experimental range studied is of 1455 ppm, a major part of the reported 17O NMR chemical shifts in organic and organometallic compounds. A number of computational factors towards relatively general and accurate predictions of 17O NMR chemical shifts were studied to provide helpful and detailed suggestions for future work. For the various kinds of oxygen-containing compounds studied, the best computational approach results in a theory-versus-experiment correlation coefficient R² of 0.9880 and mean absolute deviation of 13 ppm (1.9% of the experimental range) for isotropic NMR shifts and R² of 0.9926 for all shift tensor properties. These results shall facilitate future computational studies of 17O NMR chemical shifts in many biologically relevant systems, and the high accuracy may also help refinement and determination of active-site structures of some oxygen-containing substrate bound proteins. PMID:26274812

  16. Toward Relatively General and Accurate Quantum Chemical Predictions of Solid-State (17)O NMR Chemical Shifts in Various Biologically Relevant Oxygen-Containing Compounds.

    PubMed

    Rorick, Amber; Michael, Matthew A; Yang, Liu; Zhang, Yong

    2015-09-01

    Oxygen is an important element in most biologically significant molecules, and experimental solid-state (17)O NMR studies have provided numerous useful structural probes to study these systems. However, computational predictions of solid-state (17)O NMR chemical shift tensor properties are still challenging in many cases, and in particular, each of the prior computational works is basically limited to one type of oxygen-containing system. This work provides the first systematic study of the effects of geometry refinement, method, and basis sets for metal and nonmetal elements in both geometry optimization and NMR property calculations of some biologically relevant oxygen-containing compounds with a good variety of XO bonding groups (X = H, C, N, P, and metal). The experimental range studied is of 1455 ppm, a major part of the reported (17)O NMR chemical shifts in organic and organometallic compounds. A number of computational factors toward relatively general and accurate predictions of (17)O NMR chemical shifts were studied to provide helpful and detailed suggestions for future work. For the studied kinds of oxygen-containing compounds, the best computational approach results in a theory-versus-experiment correlation coefficient (R(2)) value of 0.9880 and a mean absolute deviation of 13 ppm (1.9% of the experimental range) for isotropic NMR shifts and an R(2) value of 0.9926 for all shift-tensor properties. These results shall facilitate future computational studies of (17)O NMR chemical shifts in many biologically relevant systems, and the high accuracy may also help the refinement and determination of active-site structures of some oxygen-containing substrate-bound proteins.

  17. Prediction scores do not correlate with clinically adjudicated categories of pulmonary embolism in critically ill patients

    PubMed Central

    Katsios, CM; Donadini, M; Meade, M; Mehta, S; Hall, R; Granton, J; Kutsiogiannis, J; Dodek, P; Heels-Ansdell, D; McIntyre, L; Vlahakis, N; Muscedere, J; Friedrich, J; Fowler, R; Skrobik, Y; Albert, M; Cox, M; Klinger, J; Nates, J; Bersten, A; Doig, C; Zytaruk, N; Crowther, M; Cook, DJ

    2014-01-01

    BACKGROUND: Prediction scores for pretest probability of pulmonary embolism (PE) validated in outpatient settings are occasionally used in the intensive care unit (ICU). OBJECTIVE: To evaluate the correlation of Geneva and Wells scores with adjudicated categories of PE in ICU patients. METHODS: In a randomized trial of thromboprophylaxis, patients with suspected PE were adjudicated as possible, probable or definite PE. Data were then retrospectively abstracted for the Geneva Diagnostic PE score, Wells, Modified Wells and Simplified Wells Diagnostic scores. The chance-corrected agreement between adjudicated categories and each score was calculated. ANOVA was used to compare values across the three adjudicated PE categories. RESULTS: Among 70 patients with suspected PE, agreement was poor between adjudicated categories and Geneva pretest probabilities (kappa 0.01 [95% CI −0.0643 to 0.0941]) or Wells pretest probabilities (kappa −0.03 [95% CI −0.1462 to 0.0914]). Among four possible, 16 probable and 50 definite PEs, there were no significant differences in Geneva scores (possible = 4.0, probable = 4.7, definite = 4.5; P=0.90), Wells scores (possible = 2.8, probable = 4.9, definite = 4.1; P=0.37), Modified Wells (possible = 2.0, probable = 3.4, definite = 2.9; P=0.34) or Simplified Wells (possible = 1.8, probable = 2.8, definite = 2.4; P=0.30). CONCLUSIONS: Pretest probability scores developed outside the ICU do not correlate with adjudicated PE categories in critically ill patients. Research is needed to develop prediction scores for this population. PMID:24083302
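
The chance-corrected agreement reported above is Cohen's kappa: observed agreement minus the agreement expected if both "raters" (here, clinical adjudication and a score-based pretest probability) labelled cases at random from their own marginal frequencies. A sketch with hypothetical labels (a negative kappa, as seen with the Wells score, means agreement worse than chance):

```python
def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two raters on the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    categories = sorted(set(labels_a) | set(labels_b))
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # agreement expected by chance, from each rater's marginal frequencies
    expected = sum(
        (labels_a.count(c) / n) * (labels_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# hypothetical adjudicated category vs. score-based pretest probability
adjudicated = ["low", "mod", "high", "high", "mod", "low"]
score_based = ["mod", "low", "high", "mod", "low", "mod"]
print(round(cohens_kappa(adjudicated, score_based), 2))  # -0.25
```

Kappa near 0, as in the study (0.01 for Geneva, -0.03 for Wells), indicates essentially no agreement beyond chance.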

  18. Deep vein thrombosis is accurately predicted by comprehensive analysis of the levels of microRNA-96 and plasma D-dimer

    PubMed Central

    Xie, Xuesheng; Liu, Changpeng; Lin, Wei; Zhan, Baoming; Dong, Changjun; Song, Zhen; Wang, Shilei; Qi, Yingguo; Wang, Jiali; Gu, Zengquan

    2016-01-01

    The aim of the present study was to investigate the association between platelet microRNA-96 (miR-96) expression levels and the occurrence of deep vein thrombosis (DVT) in orthopedic patients. A total of 69 consecutive orthopedic patients with DVT and 30 healthy individuals were enrolled. Ultrasonic color Doppler imaging was performed on lower limb veins after orthopedic surgery to determine the occurrence of DVT. An enzyme-linked fluorescent assay was performed to detect the levels of D-dimer in plasma. A quantitative polymerase chain reaction assay was performed to determine the expression levels of miR-96. Expression levels of platelet miR-96 were significantly increased in orthopedic patients after orthopedic surgery. miR-96 expression levels in orthopedic patients with DVT at days 1, 3 and 7 after orthopedic surgery were significantly increased when compared with those in the control group. The increased miR-96 expression levels were correlated with plasma D-dimer levels in orthopedic patients with DVT. However, for the orthopedic patients in the non-DVT group following surgery, miR-96 expression levels were correlated with plasma D-dimer levels. In summary, the present results suggest that the expression levels of miR-96 may be associated with the occurrence of DVT. The occurrence of DVT may be accurately predicted by comprehensive analysis of the levels of miR-96 and plasma D-dimer. PMID:27588107
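
The correlation between miR-96 expression and D-dimer described above is the kind of paired-measurement association usually quantified with a Pearson coefficient. A self-contained sketch; the fold-change and D-dimer numbers below are hypothetical, not the study's data:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between paired measurements."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(
        sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)
    )

# hypothetical paired platelet miR-96 (fold change) and D-dimer (mg/L)
mir96  = [1.2, 1.8, 2.5, 3.1, 3.8]
ddimer = [0.4, 0.9, 1.1, 1.6, 2.0]
print(round(pearson_r(mir96, ddimer), 2))  # 0.99
```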

  19. Improving prediction of soil carbon and dynamics at the Reynolds Creek Critical Zone Observatory

    NASA Astrophysics Data System (ADS)

    Lohse, Kathleen

    2015-04-01

    The Reynolds Creek Critical Zone Observatory (CZO) is being developed at the USDA-ARS Reynolds Creek Experimental Watershed in Southwestern Idaho to improve understanding and prediction of the processes governing soil carbon storage. Leveraging long-term (50 yr), spatially distributed hydroclimate data, the Reynolds Creek CZO is conducting a landscape-scale soil carbon survey, developing an environmental network for the measurement of water and carbon fluxes and calibration of land-surface models, and improving integrative modeling of carbon fluxes and stores. Preliminary soil survey data show that local topographic aspect controls of soil carbon storage can rival elevation-driven climatic controls in semi-arid environments. Lateral carbon export as surface water dissolved organic (range: 10-20 mg C/L) and inorganic carbon (range: 10-20 mg C/L) is surprisingly high in this environment. Cross-CZO activities include estimating plant-available water using multiple methods, including soil-based methods. Preliminary findings suggest that lateral carbon export in particulate as well as dissolved form may be an important carbon loss process in these semi-arid environments.

  20. Predicting maximal aerobic capacity (VO2max) from the critical velocity test in female collegiate rowers.

    PubMed

    Kendall, Kristina L; Fukuda, David H; Smith, Abbie E; Cramer, Joel T; Stout, Jeffrey R

    2012-03-01

    The objective of this study was to examine the relationship between the critical velocity (CV) test and maximal oxygen consumption (VO2max) and develop a regression equation to predict VO2max based on the CV test in female collegiate rowers. Thirty-five female (mean ± SD; age, 19.38 ± 1.3 years; height, 170.27 ± 6.07 cm; body mass, 69.58 ± 0.31 kg) collegiate rowers performed 2 incremental VO2max tests to volitional exhaustion on a Concept II Model D rowing ergometer to determine VO2max. After a 72-hour rest period, each rower completed 4 time trials at varying distances for the determination of CV and anaerobic rowing capacity (ARC). A positive correlation was observed between CV and absolute VO2max (r = 0.775, p < 0.001) and between ARC and absolute VO2max (r = 0.414, p = 0.040). Based on the significant correlation analysis, a linear regression equation was developed to predict the absolute VO2max from CV and ARC (absolute VO2max = 1.579[CV] + 0.008[ARC] - 3.838; standard error of the estimate [SEE] = 0.192 L·min(-1)). Cross-validation analyses were performed using an independent sample of 10 rowers. There was no significant difference between the mean predicted VO2max (3.02 L·min(-1)) and the observed VO2max (3.10 L·min(-1)). The constant error, SEE, and validity coefficient (r) were 0.076 L·min(-1), 0.144 L·min(-1), and 0.72, respectively. The total error value was 0.155 L·min(-1). The positive relationship between CV, ARC, and VO2max suggests that the CV test may be a practical alternative to measuring the maximal oxygen uptake in the absence of a metabolic cart. Additional studies are needed to validate the regression equation using a larger sample size and different populations (junior- and senior-level female rowers) and to determine the accuracy of the equation in tracking changes after a training intervention.
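
The reported regression can be applied directly. The coefficients and SEE below come from the abstract; the example inputs, and the assumed units of m/s for CV and metres for ARC, are illustrative:

```python
def predict_vo2max(cv_m_per_s, arc_m):
    """Absolute VO2max (L/min) from the regression reported in the
    abstract: VO2max = 1.579*CV + 0.008*ARC - 3.838 (SEE = 0.192 L/min).
    CV assumed in m/s, ARC (anaerobic rowing capacity) assumed in metres."""
    return 1.579 * cv_m_per_s + 0.008 * arc_m - 3.838

# hypothetical rower: CV of 3.9 m/s, ARC of 180 m
vo2 = predict_vo2max(3.9, 180)
print(f"{vo2:.2f} L/min")  # 3.76 L/min
```

The SEE of 0.192 L/min means roughly two-thirds of predictions should fall within about ±0.19 L/min of the measured value.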

  1. Single baseline serum creatinine measurements predict mortality in critically ill patients hospitalized for acute heart failure

    PubMed Central

    Schefold, Joerg C.; Hodoscek, Lea Majc; Blöchlinger, Stefan; Doehner, Wolfram; von Haehling, Stephan

    2015-01-01

    Abstract Background Acute heart failure (AHF) is a leading cause of death in critically ill patients and is often accompanied by significant renal dysfunction. Few data exist on the predictive value of measures of renal dysfunction in large cohorts of patients hospitalized for AHF. Methods Six hundred and eighteen patients hospitalized for AHF (300 male, aged 73.3 ± 10.3 years, 73% New York Heart Association Class 4, mean hospital length of stay 12.9 ± 7.7 days, 97% non‐ischaemic AHF) were included in a retrospective single‐centre data analysis. Echocardiographic data, serum creatinine/urea levels, estimated glomerular filtration rate (eGFR), and clinical/laboratory markers were recorded. Mean follow‐up time was 2.9 ± 2.1 years. All‐cause mortality was recorded, and univariate/multivariate analyses were performed. Results Normal renal function defined as eGFR > 90 mL/min/1.73 m2 was noted in only 3% of AHF patients at baseline. A significant correlation of left ventricular ejection fraction with serum creatinine levels and eGFR (all P < 0.002) was noted. All‐cause mortality rates were 12% (90 days) and 40% (at 2 years), respectively. In a multivariate model, increased age, higher New York Heart Association class at admission, higher total cholesterol levels, and lower eGFR independently predicted death. Patients with baseline eGFR < 30 mL/min/1.73 m2 had an exceptionally high risk of death (odds ratio 2.80, 95% confidence interval 1.52–5.15, P = 0.001). Conclusions In a large cohort of patients with mostly non‐ischaemic AHF, elevated serum creatinine levels and reduced eGFR independently predict death. It appears that patients with eGFR < 30 mL/min/1.73 m2 have the poorest survival rates. Our data add to mounting data indicating that impaired renal function is an important risk factor for non‐survival in patients hospitalized for AHF.

  2. Accurate Prediction of Hyperfine Coupling Constants in Muoniated and Hydrogenated Ethyl Radicals: Ab Initio Path Integral Simulation Study with Density Functional Theory Method.

    PubMed

    Yamada, Kenta; Kawashima, Yukio; Tachikawa, Masanori

    2014-05-13

    We performed ab initio path integral molecular dynamics (PIMD) simulations with a density functional theory (DFT) method to accurately predict hyperfine coupling constants (HFCCs) in the ethyl radical (CβH3-CαH2) and its Mu-substituted (muoniated) compound (CβH2Mu-CαH2). The substitution of a Mu atom, an ultralight isotope of the H atom, with larger nuclear quantum effect is expected to strongly affect the nature of the ethyl radical. The static conventional DFT calculations of CβH3-CαH2 find that the elongation of one Cβ-H bond causes a change in the shape of potential energy curve along the rotational angle via the imbalance of attractive and repulsive interactions between the methyl and methylene groups. Investigation of the methyl-group behavior including the nuclear quantum and thermal effects shows that an unbalanced CβH2Mu group with the elongated Cβ-Mu bond rotates around the Cβ-Cα bond in a muoniated ethyl radical, quite differently from the CβH3 group with the three equivalent Cβ-H bonds in the ethyl radical. These rotations couple with other molecular motions such as the methylene-group rocking motion (inversion), leading to difficulties in reproducing the corresponding barrier heights. Our PIMD simulations successfully predict the barrier heights to be close to the experimental values and provide a significant improvement in muon and proton HFCCs given by the static conventional DFT method. Further investigation reveals that the Cβ-Mu/H stretching motion, methyl-group rotation, methylene-group rocking motion, and HFCC values deeply intertwine with each other. Because these motions are different between the radicals, a proper description of the structural fluctuations reflecting the nuclear quantum and thermal effects is vital to evaluate HFCC values in theory to be comparable to the experimental ones. Accordingly, a fundamental difference in HFCC between the radicals arises from their intrinsic molecular motions at a finite temperature, in

  3. Predicting Critical Thinking Skills of University Students through Metacognitive Self-Regulation Skills and Chemistry Self-Efficacy

    ERIC Educational Resources Information Center

    Uzuntiryaki-Kondakci, Esen; Capa-Aydin, Yesim

    2013-01-01

    This study aimed at examining the extent to which metacognitive self-regulation and chemistry self-efficacy predicted critical thinking. Three hundred sixty-five university students participated in the study. Data were collected using appropriate dimensions of Motivated Strategies for Learning Questionnaire and College Chemistry Self-Efficacy…

  4. Urinary L-FABP predicts poor outcomes in critically ill patients with early acute kidney injury.

    PubMed

    Parr, Sharidan K; Clark, Amanda J; Bian, Aihua; Shintani, Ayumi K; Wickersham, Nancy E; Ware, Lorraine B; Ikizler, T Alp; Siew, Edward D

    2015-03-01

    Biomarker studies for early detection of acute kidney injury (AKI) have been limited by nonselective testing and uncertainties in using small changes in serum creatinine as a reference standard. Here we examine the ability of urine L-type fatty acid-binding protein (L-FABP), neutrophil gelatinase-associated lipocalin (NGAL), interleukin-18 (IL-18), and kidney injury molecule-1 (KIM-1) to predict injury progression, dialysis, or death within 7 days in critically ill adults with early AKI. Of 152 patients with known baseline creatinine examined, 36 experienced the composite outcome. Urine L-FABP demonstrated an area under the receiver-operating characteristic curve (AUC-ROC) of 0.79 (95% confidence interval 0.70-0.86), which improved to 0.82 (95% confidence interval 0.75-0.90) when added to the clinical model (AUC-ROC of 0.74). Urine NGAL, IL-18, and KIM-1 had AUC-ROCs of 0.65, 0.64, and 0.62, respectively, but did not significantly improve discrimination of the clinical model. The category-free net reclassification index improved with urine L-FABP (total net reclassification index for nonevents 31.0%) and urine NGAL (total net reclassification index for events 33.3%). However, only urine L-FABP significantly improved the integrated discrimination index. Thus, modest early changes in serum creatinine can help target biomarker measurement for determining prognosis with urine L-FABP, providing independent and additive prognostic information when combined with clinical predictors.
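
The AUC-ROC values above have a useful probabilistic reading: the chance that a randomly chosen patient who reached the composite outcome has a higher biomarker level than a randomly chosen patient who did not (the Mann-Whitney interpretation). A sketch with hypothetical urine L-FABP values:

```python
from itertools import product

def auc_roc(scores_events, scores_nonevents):
    """AUC as the probability that a randomly chosen event case scores
    higher than a randomly chosen non-event case (ties count 0.5).
    Equivalent to the Mann-Whitney U statistic scaled to [0, 1]."""
    wins = sum(
        1.0 if e > n else 0.5 if e == n else 0.0
        for e, n in product(scores_events, scores_nonevents)
    )
    return wins / (len(scores_events) * len(scores_nonevents))

# hypothetical urine L-FABP levels (ng/mL); not the study's data
events    = [120, 300, 80, 450]   # progressed, dialyzed, or died
nonevents = [40, 95, 60, 30, 110]
print(auc_roc(events, nonevents))  # 0.9
```

By this reading, the study's L-FABP AUC of 0.79 means a randomly chosen event case out-scores a randomly chosen non-event case about 79% of the time.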

  5. Predictive power of the Braden scale for pressure sore risk in adult critical care patients: a comprehensive review.

    PubMed

    Cox, Jill

    2012-01-01

    Critical care is designed for managing the sickest patients within our healthcare system. Multiple factors associated with an increased likelihood of pressure ulcer development have been investigated in the critical care population. Nevertheless, there is a lack of consensus regarding which of these factors poses the greatest risk for pressure ulceration. While the Braden scale for pressure sore risk is the most commonly used tool for measuring pressure ulcer risk in the United States, research focusing on the cumulative Braden Scale score and subscale scores is lacking in the critical care population. This author conducted a literature review on pressure ulcer risk assessment in the critical care population, to include the predictive value of both the total score and the subscale scores. In this review, the subscales sensory perception, mobility, moisture, and friction/shear were found to be associated with an increased likelihood of pressure ulcer development; in contrast, the Activity and Nutrition subscales were not found to predict pressure ulcer development in this population. In order to more precisely quantify risk in the critically ill population, modification of the Braden scale or development of a critical care specific risk assessment tool may be indicated.

  6. Early lactate clearance for predicting active bleeding in critically ill patients with acute upper gastrointestinal bleeding: a retrospective study.

    PubMed

    Wada, Tomoki; Hagiwara, Akiyoshi; Uemura, Tatsuki; Yahagi, Naoki; Kimura, Akio

    2016-08-01

    Not all patients with upper gastrointestinal bleeding (UGIB) require emergency endoscopy. Lactate clearance has been suggested as a parameter for predicting patient outcomes in various critical care settings. This study investigates whether lactate clearance can predict active bleeding in critically ill patients with UGIB. This single-center, retrospective, observational study included critically ill patients with UGIB who met all of the following criteria: admission to the emergency department (ED) from April 2011 to August 2014; had blood samples for lactate evaluation at least twice during the ED stay; and had emergency endoscopy within 6 h of ED presentation. The main outcome was active bleeding detected with emergency endoscopy. Classification and regression tree (CART) analyses were performed using variables associated with active bleeding to derive a prediction rule for active bleeding in critically ill UGIB patients. A total of 154 patients with UGIB were analyzed, and 31.2 % (48/154) had active bleeding. In the univariate analysis, lactate clearance was significantly lower in patients with active bleeding than in those without active bleeding (13 vs. 29 %, P < 0.001). Using the CART analysis, a prediction rule for active bleeding is derived, and includes three variables: lactate clearance; platelet count; and systolic blood pressure at ED presentation. The rule has 97.9 % (95 % CI 90.2-99.6 %) sensitivity with 32.1 % (28.6-32.9 %) specificity. Lactate clearance may be associated with active bleeding in critically ill patients with UGIB, and may be clinically useful as a component of a prediction rule for active bleeding.
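
Lactate clearance is conventionally the percentage fall between the first and a repeat measurement. The study's full prediction rule also uses platelet count and systolic blood pressure, but the abstract does not give the CART thresholds, so only the clearance calculation is sketched here, with illustrative values:

```python
def lactate_clearance(initial_mmol_l, repeat_mmol_l):
    """Percent fall in lactate between two ED measurements."""
    return (initial_mmol_l - repeat_mmol_l) / initial_mmol_l * 100

# example: 4.0 mmol/L on arrival, 3.4 mmol/L on repeat sampling
print(round(lactate_clearance(4.0, 3.4), 1))  # 15.0
```

A clearance of 15% would sit near the study's median for active bleeders (13%) and well below the median without active bleeding (29%).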

  7. The Value of Accurate Magnetic Resonance Characterization of Posterior Cruciate Ligament Tears in the Setting of Multiligament Knee Injury: Imaging Features Predictive of Early Repair vs Reconstruction.

    PubMed

    Goiney, Christopher C; Porrino, Jack; Twaddle, Bruce; Richardson, Michael L; Mulcahy, Hyojeong; Chew, Felix S

    2016-01-01

    Multiligament knee injury (MLKI) represents a complex set of pathologies treated with a wide variety of surgical approaches. If early surgical intervention is performed, the disrupted posterior cruciate ligament (PCL) can be treated with primary repair or reconstruction. The purpose of our study was to retrospectively identify a critical length of the distal component of the torn PCL on magnetic resonance imaging (MRI) that may predict the ability to perform early proximal femoral repair of the ligament, as opposed to reconstruction. A total of 50 MLKIs were managed at Harborview Medical Center from May 1, 2013, through July 15, 2014, by an orthopedic surgeon. Following exclusions, there were 27 knees with complete disruption of the PCL that underwent either early reattachment to the femoral insertion or reconstruction and were evaluated using preoperative MRI. In a consensus fashion, 2 radiologists measured the proximal and distal fragments of each disrupted PCL using preoperative MRI in multiple planes, as needed. MRI findings were correlated with what was performed at surgery. Those knees with a distal PCL fragment length of ≥41 mm were capable of, and underwent, early proximal femoral repair. With repair, the distal stump was attached to the distal femur. Alternatively, those with a distal PCL length of ≤32 mm could not undergo repair because of insufficient length and, as such, were reconstructed. If early surgical intervention for an MLKI involving disruption of the PCL is considered, attention should be given to the length of the distal PCL fragment on MRI to plan appropriately for proximal femoral reattachment vs reconstruction. If the distal PCL fragment measures ≥41 mm, surgical repair is achievable and can be considered as a surgical option.

  8. Investigating Predictive Role of Critical Thinking on Metacognition with Structural Equation Modeling

    ERIC Educational Resources Information Center

    Arslan, Serhat

    2015-01-01

    The purpose of this study is to examine the relationships between critical thinking and metacognition. The sample of study consists of 390 university students who were enrolled in different programs at Sakarya University, in Turkey. In this study, the Critical Thinking Disposition Scale and Metacognitive Thinking Scale were used. The relationships…

  9. A critical examination of the predictive capabilities of a new type of general laminated plate theory in the inelastic response regime

    SciTech Connect

    Williams, Todd O

    2008-01-01

    Recently, a new type of general, multiscale plate theory was developed for application to the analysis of the history-dependent response of laminated plates (Williams). In particular, the history-dependent behavior in a plate was considered to arise from both delamination effects as well as history-dependent material point responses (such as from viscoelasticity, viscoplasticity, damage, etc.). The multiscale nature of the theoretical framework is due to the use of a superposition of both general global and local displacement effects. Using this global-local displacement field, the governing equations of the theory are obtained by satisfying the governing equations of nonlinear continuum mechanics referenced to the initial configuration. In order to accomplish the goal of conducting accurate analyses in the history-dependent response regimes, the formulation of the theory has been carried out in a sufficiently general fashion that any cohesive zone model (CZM) and any history-dependent constitutive model for a material point can be incorporated into the analysis without reformulation. Recently, the older multiscale theory of Williams was implemented into the finite element (FE) framework by Mourad et al., and the resulting capabilities were used to show that, in a qualitative sense, it is important that the local fields be accurately obtained in order to correctly predict even the overall response characteristics of a laminated plate in the inelastic regime. The goal of this work is to critically examine the predictive capabilities of this theory, as well as the older multiscale theory of Williams and other types of laminated plate theories, against recently developed exact solutions for the response of inelastic plates in cylindrical bending (Williams). These exact solutions are valid for both nonlinear CZMs as well as inelastic material responses obtained from different constitutive theories. In particular, the accuracy with which the different plate theories

  10. Evaluation of cloud prediction and determination of critical relative humidity for a mesoscale numerical weather prediction model

    SciTech Connect

    Seaman, N.L.; Guo, Z.; Ackerman, T.P.

    1996-04-01

    Predictions of cloud occurrence and vertical location from the Pennsylvania State University/National Center for Atmospheric Research nonhydrostatic mesoscale model (MM5) were evaluated statistically using cloud observations obtained at Coffeyville, Kansas, as part of the Second International Satellite Cloud Climatology Project Regional Experiment campaign. Seventeen cases were selected for simulation during a November-December 1991 field study. MM5 was used to produce two sets of 36-km simulations, one with and one without four-dimensional data assimilation (FDDA), and a set of 12-km simulations without FDDA, but nested within the 36-km FDDA runs.

  11. Critical Velocity and Anaerobic Paddling Capacity Determined by Different Mathematical Models and Number of Predictive Trials in Canoe Slalom

    PubMed Central

    Messias, Leonardo H. D.; Ferrari, Homero G.; Reis, Ivan G. M.; Scariot, Pedro P. M.; Manchado-Gobatto, Fúlvia B.

    2015-01-01

    The purpose of this study was to analyze if different combinations of trials as well as mathematical models can modify the aerobic and anaerobic estimates from the critical velocity protocol applied in canoe slalom. Fourteen male elite slalom kayakers from the Brazilian canoe slalom team (K1) were evaluated. Athletes were submitted to four predictive trials of 150, 300, 450 and 600 meters in a lake, and the time to complete each trial was recorded. Critical velocity (CV-aerobic parameter) and anaerobic paddling capacity (APC-anaerobic parameter) were obtained by three mathematical models (Linear 1 = distance-time; Linear 2 = velocity-1/time; and Non-Linear = time-velocity). Linear 1 was chosen for comparison of predictive trial combinations. The standard combination (SC) was considered as the four trials (150, 300, 450 and 600 m). High fits of regression were obtained from all mathematical models (range - R² = 0.96-1.00). Repeated measures ANOVA pointed out differences of all mathematical models for CV (p = 0.006) and APC (p = 0.016) as well as R² (p = 0.033). Estimates obtained from the first (150 m = lowest) and the fourth (600 m = highest) predictive trials were similar and highly correlated (r = 0.98 for CV and r = 0.96 for APC) with the SC. In summary, methodological aspects must be considered in critical velocity application in canoe slalom, since different combinations of trials as well as mathematical models resulted in different aerobic and anaerobic estimates. Key points: Great attention must be given to methodological concerns regarding the critical velocity protocol applied to canoe slalom, since different estimates were obtained depending on the mathematical model and the predictive trials used. Linear 1 showed the best fits of regression. Furthermore, to the best of our knowledge and considering practical applications, this model is the easiest one to calculate the estimates from the critical velocity protocol. Considering this, the abyss between
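
The "Linear 1" (distance-time) model regresses trial distance on trial time: the slope is the critical velocity (CV) and the intercept is the anaerobic paddling capacity (APC). A least-squares sketch; the trial times below are hypothetical, only the four distances come from the study:

```python
def fit_linear1(distances_m, times_s):
    """Least-squares fit of the 'Linear 1' model d = APC + CV * t:
    slope = critical velocity (m/s), intercept = anaerobic capacity (m)."""
    n = len(times_s)
    mt = sum(times_s) / n
    md = sum(distances_m) / n
    cv = sum((t - mt) * (d - md) for t, d in zip(times_s, distances_m)) \
         / sum((t - mt) ** 2 for t in times_s)
    apc = md - cv * mt
    return cv, apc

# hypothetical kayaker: times for the 150/300/450/600 m predictive trials
distances = [150, 300, 450, 600]
times = [40.0, 85.0, 130.0, 175.0]
cv, apc = fit_linear1(distances, times)
print(round(cv, 2), round(apc, 1))  # 3.33 16.7
```

Fitting only a subset of trials (e.g. the shortest and longest) changes the slope and intercept, which is exactly the sensitivity to trial combination the study reports.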

  12. Critical velocity and anaerobic paddling capacity determined by different mathematical models and number of predictive trials in canoe slalom.

    PubMed

    Messias, Leonardo H D; Ferrari, Homero G; Reis, Ivan G M; Scariot, Pedro P M; Manchado-Gobatto, Fúlvia B

    2015-03-01

    The purpose of this study was to analyze if different combinations of trials as well as mathematical models can modify the aerobic and anaerobic estimates from the critical velocity protocol applied in canoe slalom. Fourteen male elite slalom kayakers from the Brazilian canoe slalom team (K1) were evaluated. Athletes were submitted to four predictive trials of 150, 300, 450 and 600 meters in a lake, and the time to complete each trial was recorded. Critical velocity (CV-aerobic parameter) and anaerobic paddling capacity (APC-anaerobic parameter) were obtained by three mathematical models (Linear 1 = distance-time; Linear 2 = velocity-1/time; and Non-Linear = time-velocity). Linear 1 was chosen for comparison of predictive trial combinations. The standard combination (SC) was considered as the four trials (150, 300, 450 and 600 m). High fits of regression were obtained from all mathematical models (range - R² = 0.96-1.00). Repeated measures ANOVA pointed out differences of all mathematical models for CV (p = 0.006) and APC (p = 0.016) as well as R² (p = 0.033). Estimates obtained from the first (150 m = lowest) and the fourth (600 m = highest) predictive trials were similar and highly correlated (r = 0.98 for CV and r = 0.96 for APC) with the SC. In summary, methodological aspects must be considered in critical velocity application in canoe slalom, since different combinations of trials as well as mathematical models resulted in different aerobic and anaerobic estimates. Key points: Great attention must be given to methodological concerns regarding the critical velocity protocol applied to canoe slalom, since different estimates were obtained depending on the mathematical model and the predictive trials used. Linear 1 showed the best fits of regression. Furthermore, to the best of our knowledge and considering practical applications, this model is the easiest one to calculate the estimates from the critical velocity protocol. Considering this, the abyss between science

  13. Derivation and validation of the renal angina index to improve the prediction of acute kidney injury in critically ill children

    PubMed Central

    Basu, Rajit K.; Zappitelli, Michael; Brunner, Lori; Wang, Yu; Wong, Hector R.; Chawla, Lakhmir S.; Wheeler, Derek S.; Goldstein, Stuart L.

    2015-01-01

    Reliable prediction of severe acute kidney injury (AKI) has the potential to optimize treatment. Here we operationalized the empiric concept of renal angina with a renal angina index (RAI) and determined the predictive performance of the RAI. This was assessed on admission to the pediatric intensive care unit, for subsequent severe AKI (over 200% rise in serum creatinine) 72 h later (Day-3 AKI). In a multicenter, four-cohort appraisal (one derivation and three validation), incidence rates for a Day 0 RAI of 8 or more were 15–68%, and Day-3 AKI rates were 13–21%. In all cohorts, Day-3 AKI rates were higher in patients with an RAI of 8 or more, and the area under the curve of the RAI for predicting Day-3 AKI was 0.74–0.81. An RAI under 8 had high negative predictive values (92–99%) for Day-3 AKI. The RAI outperformed traditional markers of pediatric severity of illness (Pediatric Risk of Mortality-II) and AKI risk factors alone for prediction of Day-3 AKI. Additionally, the RAI outperformed all KDIGO stages for prediction of Day-3 AKI. Thus, we operationalized the renal angina concept by deriving and validating the RAI for prediction of subsequent severe AKI. The RAI provides a clinically feasible and applicable methodology to identify critically ill children at risk of severe AKI lasting beyond functional injury. The RAI may potentially reduce capricious AKI biomarker use by identifying patients in whom further testing would be most beneficial. PMID:24048379
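
The negative predictive value quoted for an RAI under 8 comes from a standard 2x2 screening table once the index is dichotomized at 8. A sketch with hypothetical counts (not the study's data):

```python
def diagnostics(tp, fp, fn, tn):
    """Standard 2x2 screening metrics for a dichotomized index
    (e.g. RAI >= 8 as 'test positive' for later severe AKI)."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# hypothetical cohort: 20 Day-3 AKI cases, 180 without
m = diagnostics(tp=18, fp=42, fn=2, tn=138)
print(round(m["npv"], 2))  # 0.99
```

A high NPV is what makes the RAI useful as a rule-out: a score under 8 makes subsequent severe AKI unlikely, so further biomarker testing can be focused on the RAI-positive group.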

  14. Critical evaluation of in silico methods for prediction of coiled-coil domains in proteins.

    PubMed

    Li, Chen; Ching Han Chang, Catherine; Nagel, Jeremy; Porebski, Benjamin T; Hayashida, Morihiro; Akutsu, Tatsuya; Song, Jiangning; Buckle, Ashley M

    2016-03-01

    Coiled-coils refer to a bundle of helices coiled together like strands of a rope. It has been estimated that nearly 3% of protein-encoding regions of genes harbour coiled-coil domains (CCDs). Experimental studies have confirmed that CCDs play a fundamental role in subcellular infrastructure and controlling trafficking of eukaryotic cells. Given the importance of coiled-coils, multiple bioinformatics tools have been developed to facilitate the systematic and high-throughput prediction of CCDs in proteins. In this article, we review and compare 12 sequence-based bioinformatics approaches and tools for coiled-coil prediction. These approaches can be categorized into two classes: coiled-coil detection and coiled-coil oligomeric state prediction. We evaluated and compared these methods in terms of their input/output, algorithm, prediction performance, validation methods and software utility. All the independent testing data sets are available at http://lightning.med.monash.edu/coiledcoil/. In addition, we conducted a case study of nine human polyglutamine (PolyQ) disease-related proteins and predicted CCDs and oligomeric states using various predictors. Prediction results for CCDs were highly variable among different predictors. Only two peptides from two proteins were confirmed to be CCDs by majority voting. Both domains were predicted to form dimeric coiled-coils using oligomeric state prediction. We anticipate that this comprehensive analysis will be an insightful resource for structural biologists with limited prior experience in bioinformatics tools, and for bioinformaticians who are interested in designing novel approaches for coiled-coil and its oligomeric state prediction. PMID:26177815
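
The "majority voting" used above to confirm CCDs can be sketched as a simple consensus rule over the individual predictors' calls; the calls below are hypothetical:

```python
from collections import Counter

def majority_vote(predictions):
    """Accept a label only when strictly more than half of the
    predictors agree; otherwise report no consensus."""
    label, n = Counter(predictions).most_common(1)[0]
    return label if n > len(predictions) / 2 else "no-consensus"

# hypothetical calls from 12 coiled-coil predictors on one peptide
calls = ["CCD"] * 7 + ["not-CCD"] * 5
print(majority_vote(calls))  # CCD
```

With 12 predictors, at least 7 concordant calls are required, which is why highly variable predictions confirmed only two peptides in the case study.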

  15. Critical assessment of methods of protein structure prediction-Round VII.

    PubMed

    Moult, John; Fidelis, Krzysztof; Kryshtafovych, Andriy; Rost, Burkhard; Hubbard, Tim; Tramontano, Anna

    2007-01-01

    This paper is an introduction to the supplemental issue of the journal PROTEINS, dedicated to the seventh CASP experiment to assess the state of the art in protein structure prediction. The paper describes the conduct of the experiment, the categories of prediction included, and outlines the evaluation and assessment procedures. Highlights are improvements in model accuracy relative to that obtainable from knowledge of a single best template structure; convergence of the accuracy of models produced by automatic servers toward that produced by human modeling teams; the emergence of methods for predicting the quality of models; and rapidly increasing practical applications of the methods. PMID:17918729

  16. Criticality Model Report

    SciTech Connect

    J.M. Scaglione

    2003-03-12

    The purpose of the ''Criticality Model Report'' is to validate the ability of the MCNP code (CRWMS M&O 1998h) to accurately predict the effective neutron multiplication factor (k{sub eff}) for a range of conditions spanned by various critical configurations representative of the potential configurations that commercial reactor assemblies stored in a waste package may take. The results of this work indicate the accuracy of MCNP for calculating eigenvalues, which will be used as input for criticality analyses for spent nuclear fuel (SNF) storage at the proposed Monitored Geologic Repository. The scope of this report is to document the development and validation of the criticality model, which is applicable only to commercial pressurized water reactor fuel. Valid ranges are established as part of the validation of the criticality model. This model activity follows the description in BSC (2002a).
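
    The validation described above amounts to quantifying a code bias from benchmark eigenvalues. A minimal, generic sketch of that idea (the k-eff values, the spread estimate, and the administrative margin below are invented for illustration and are not taken from the report):

```python
from statistics import mean, stdev

def code_bias(keff_calc, keff_benchmark=1.0):
    """Mean deviation of calculated k-eff from the benchmark (critical) value."""
    return mean(k - keff_benchmark for k in keff_calc)

def upper_subcritical_limit(keff_calc, admin_margin=0.05):
    """One common form of an upper subcritical limit:
    USL = 1 - |bias| - spread of validation results - administrative margin.
    The margin and the use of the sample standard deviation are illustrative."""
    return 1.0 - abs(code_bias(keff_calc)) - stdev(keff_calc) - admin_margin
```

    A design point worth noting: the bias is subtracted conservatively regardless of its sign, so a code that over-predicts k-eff does not relax the limit.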

  17. Accurate prediction of explicit solvent atom distribution in HIV-1 protease and F-ATP synthase by statistical theory of liquids

    NASA Astrophysics Data System (ADS)

    Sindhikara, Daniel; Yoshida, Norio; Hirata, Fumio

    2012-02-01

    We have created a simple algorithm for automatically predicting the explicit solvent atom distribution of biomolecules. The explicit distribution is coerced from the 3D continuous distribution resulting from a 3D-RISM calculation. This procedure predicts optimal location of solvent molecules and ions given a rigid biomolecular structure. We show examples of predicting water molecules near KNI-275 bound form of HIV-1 protease and predicting both sodium ions and water molecules near the rotor ring of F-ATP synthase. Our results give excellent agreement with experimental structure with an average prediction error of 0.45-0.65 angstroms. Further, unlike experimental methods, this method does not suffer from the partial occupancy limit. Our method can be performed directly on 3D-RISM output within minutes. It is useful not only as a location predictor but also as a convenient method for generating initial structures for MD calculations.
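
    The placement step the abstract describes — coercing discrete solvent sites out of a continuous 3-D distribution — can be sketched as a search for local maxima above a density threshold. Everything below (the nested-list grid format, the threshold value) is an illustrative assumption, not the actual 3D-RISM post-processing:

```python
def predict_sites(density, threshold=2.0):
    """Return (i, j, k) indices of interior voxels that are local maxima of a
    3-D distribution (nested lists) and exceed `threshold` times bulk density."""
    nx, ny, nz = len(density), len(density[0]), len(density[0][0])
    sites = []
    for i in range(1, nx - 1):
        for j in range(1, ny - 1):
            for k in range(1, nz - 1):
                v = density[i][j][k]
                if v <= threshold:
                    continue
                # Compare against the full 3x3x3 neighborhood (includes v itself).
                neighborhood = [density[a][b][c]
                                for a in range(i - 1, i + 2)
                                for b in range(j - 1, j + 2)
                                for c in range(k - 1, k + 2)]
                if v == max(neighborhood):
                    sites.append((i, j, k))
    return sites
```

    Each reported index would then be mapped back to Cartesian coordinates and used as an explicit solvent position, e.g. as a starting structure for MD.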

  18. NetMHC-3.0: accurate web accessible predictions of human, mouse and monkey MHC class I affinities for peptides of length 8-11.

    PubMed

    Lundegaard, Claus; Lamberth, Kasper; Harndahl, Mikkel; Buus, Søren; Lund, Ole; Nielsen, Morten

    2008-07-01

    NetMHC-3.0 is trained on a large number of quantitative peptide data, using both affinity data from the Immune Epitope Database and Analysis Resource (IEDB) and elution data from SYFPEITHI. The method generates high-accuracy predictions of major histocompatibility complex (MHC):peptide binding. The predictions are based on artificial neural networks trained on data from 55 MHC alleles (43 human and 12 non-human), and on position-specific scoring matrices (PSSMs) for an additional 67 HLA alleles. As only MHC class I prediction is available, predictions are possible for peptides of length 8-11 for all 122 alleles. Artificial neural network predictions are given as actual IC(50) values, whereas PSSM predictions are given as log-odds likelihood scores. The output is optionally available as a download for easy post-processing. The training method underlying the server is the best available, and has been used to predict possible MHC-binding peptides in a series of pathogenic viral proteomes, including SARS, influenza and HIV, resulting in an average of 75-80% confirmed MHC binders. Here, the performance is further validated and benchmarked using a large set of newly published affinity data that is non-redundant with the training set. The server is free to use and available at: http://www.cbs.dtu.dk/services/NetMHC.
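
    A PSSM prediction of the kind served for the matrix-only alleles reduces to summing position-specific log-odds over the peptide. The two-position matrix below is a toy for illustration, not a real NetMHC matrix:

```python
def pssm_score(peptide, pssm):
    """Score a peptide against a PSSM given as a list (one entry per position)
    of dicts mapping residue -> log-odds; unseen residues score 0."""
    return sum(pssm[i].get(aa, 0.0) for i, aa in enumerate(peptide))

# Toy two-position matrix (hypothetical values).
toy_pssm = [{"A": 1.2, "K": -0.5},
            {"L": 0.8, "A": 0.1}]
```

    Higher scores indicate peptides more similar to known binders of the allele the matrix was built for.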

  19. A critical discussion on the applicability of Compound Topographic Index (CTI) for predicting ephemeral gully erosion

    NASA Astrophysics Data System (ADS)

    Casalí, Javier; Chahor, Youssef; Giménez, Rafael; Campo-Bescós, Miguel

    2016-04-01

    The so-called Compound Topographic Index (CTI) can be calculated for each grid cell in a DEM and used to identify potential locations of ephemeral gullies (EGs) from land topography (CTI = A.S.PLANC, where A is the upstream drainage area, S the local slope and PLANC the planform curvature, a measure of landscape convergence) (Parker et al., 2007). It can be shown that CTI represents stream power per unit bed area, and it captures the major parameters controlling the pattern and intensity of concentrated surface runoff in the field (Parker et al., 2007). However, other key variables controlling EG erosion, such as soil characteristics, land use and management, are not taken into consideration. The critical CTI value (CTIc) "represents the intensity of concentrated overland flow necessary to initiate erosion and channelised flow under a given set of circumstances" (Parker et al., 2007). The AnnAGNPS (Annualized Agricultural Non-Point Source) pollution model is an important management tool developed by the USDA that uses CTI to locate potential ephemeral gullies. Depending on the rainfall characteristics of the period simulated by AnnAGNPS, potential EGs can then become "actual" and be simulated by the model accordingly. This paper presents preliminary results and a number of considerations from evaluating the CTI tool in Navarre. The CTIc values found are similar to those cited by other authors, and the EG networks that on average occur in the area have been located reasonably well. After our experience, we believe it is necessary to distinguish between the CTIc corresponding to the location of the headcuts whose migration originates the EGs (CTIc1), and the CTIc necessary to represent the location of the gully networks in the watershed (CTIc2), where gully headcuts are located at the upstream end of the gullies. Most scientists consider only one CTIc value although, from our point of view, the two situations are different. 
    CTIc1 would represent the
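
    The index itself is simple to compute per grid cell; a minimal sketch of flagging potential ephemeral-gully cells against a critical value CTIc (the cell values and threshold in the test are invented):

```python
def cti(area, slope, planform_curvature):
    """CTI = A * S * PLANC for one grid cell (Parker et al., 2007)."""
    return area * slope * planform_curvature

def potential_gully_cells(cells, cti_critical):
    """Indices of cells whose CTI exceeds the critical value CTIc.
    `cells` is a list of (A, S, PLANC) tuples, one per DEM grid cell."""
    return [i for i, (a, s, p) in enumerate(cells)
            if cti(a, s, p) > cti_critical]
```

    In a real application the (A, S, PLANC) triples would be derived from the DEM by flow-accumulation, slope, and curvature routines, and CTIc would be calibrated against observed gullies.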

  1. Accurate modeling of parallel scientific computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Townsend, James C.

    1988-01-01

    Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.
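
    The kind of performance model the paper relies on can be sketched in its simplest form: the predicted step time is the slowest partition's compute cost plus a communication term. The cost constants below are illustrative assumptions, not the paper's fitted model:

```python
def predicted_step_time(partition_sizes, t_cell, t_comm):
    """Predict one timestep of a 1-D grid computation: the step is gated by
    the processor with the most cells, plus a cost per partition boundary.
    `partition_sizes` holds the cell count assigned to each processor."""
    n_boundaries = len(partition_sizes) - 1
    return max(partition_sizes) * t_cell + n_boundaries * t_comm
```

    A remapping scheduler would compare this prediction for the current partition against the prediction for a rebalanced one, and remap when the gain exceeds the remapping cost.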

  2. Predictive parameters for medical expulsive therapy in ureteral stones: a critical evaluation.

    PubMed

    Sahin, Cahit; Eryildirim, Bilal; Kafkasli, Alper; Coskun, Alper; Tarhan, Fatih; Faydaci, Gokhan; Sarica, Kemal

    2015-06-01

    To evaluate the predictive value of certain radiological and stone-related parameters for the success of medical expulsive therapy (MET) with an alpha-blocker in ureteral stones. A total of 129 patients receiving MET for 5 to 10 mm ureteral stones were evaluated. Patients were divided into two subgroups: MET was successful in 64 cases (49.61%) and unsuccessful in 65 cases (50.39%). Prior to management, stone size, location and position in the ureter, degree of hydronephrosis, diameter of the ureteral lumen proximal to the stone, and ureteral wall thickness, along with patient demographics including body mass index (BMI), were evaluated and recorded. The possible predictive value of these parameters for stone expulsion (and stone expulsion time) was evaluated in a comparative manner between the two groups. The overall mean patient age and stone size were 38.02 ± 0.94 years and 40.31 ± 1.13 mm(2), respectively. Regarding the predictive value of these parameters for MET success, stone size and localization, degree of hydronephrosis, proximal ureteral diameter and ureteral wall thickness were found to be highly predictive, whereas patient age, BMI and stone density had no predictive value. Our findings indicate that certain stone-related and anatomical factors may be used to predict the success of MET in an effective manner. With this approach, unnecessary use of these drugs, which may delay stone removal, can be avoided, and the possible adverse effects of obstruction as well as stone-related clinical symptoms can be minimized.

  3. Effect of microorganism characteristics on leak size critical to predicting package sterility.

    PubMed

    Keller, Scott; Marcy, Joseph; Blakistone, Barbara; Hackney, Cameron; Carter, W Hans; Lacy, George

    2003-09-01

    The effects of microorganism size and motility on the leak size critical to the sterility of a package, along with the imposed pressure required to initiate liquid flow for the critical leak size, were measured. Pseudomonas fragi Lacy-1052, Bacillus atrophaeus ATCC 49337, and Enterobacter aerogenes ATCC 29007 were employed to assess package sterility. One hundred twenty-six 7-mm-long microtubes with interior diameters of 5, 10, and 20 microm were used to simulate package defects. Forty-two solid microtubes were used as controls. No significant differences were found between sizes or motility statuses of test organisms with respect to loss of sterility as a result of microbial ingress into test cells with microtube interior diameters of 5, 10, and 20 microm (P > 0.05). Interactions between the initiation of liquid flow as a result of applied threshold pressures and sterility loss for test cells were significant (P < 0.05).

  4. Changes in aortic blood flow induced by passive leg raising predict fluid responsiveness in critically ill patients

    PubMed Central

    Lafanechère, A; Pène, F; Goulenok, C; Delahaye, A; Mallet, V; Choukroun, G; Chiche, JD; Mira, JP; Cariou, A

    2006-01-01

    Introduction: Esophageal Doppler provides a continuous and non-invasive estimate of descending aortic blood flow (ABF) and corrected left ventricular ejection time (LVETc). Considering passive leg raising (PLR) as a reversible volume expansion (VE), we compared the relative abilities of PLR-induced ABF variations, LVETc and respiratory pulse pressure variations (ΔPP) to predict fluid responsiveness. Methods: We studied 22 critically ill patients in acute circulatory failure in the supine position, during PLR, back in the supine position, and after two consecutive VEs of 250 ml of saline. Responders were defined by an increase in ABF of more than 15% induced by the 500 ml VE. Results: Ten patients were responders and 12 were non-responders. In responders, the increase in ABF induced by PLR was similar to that induced by a 250 ml VE (16% versus 20%; p = 0.15). A PLR-induced increase in ABF of more than 8% predicted fluid responsiveness with a sensitivity of 90% and a specificity of 83%. The corresponding positive and negative predictive values (PPV and NPV) were 82% and 91%, respectively. A ΔPP threshold value of 12% predicted fluid responsiveness with a sensitivity of 70% and a specificity of 92%. The corresponding PPV and NPV were 87% and 78%, respectively. An LVETc of 245 ms or less predicted fluid responsiveness with a sensitivity of 70% and a specificity of 67%. The corresponding PPV and NPV were 60% and 66%, respectively. Conclusion: The PLR-induced increase in ABF and a ΔPP of more than 12% offer similar value in predicting fluid responsiveness. An isolated basal LVETc value is not a reliable criterion for predicting the response to fluid loading. PMID:16970817
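
    The reported predictive values follow from sensitivity, specificity and prevalence by Bayes' rule; the sketch below reproduces the abstract's PLR figures (sensitivity 0.90, specificity 0.83, 10 responders of 22 patients):

```python
def ppv_npv(sensitivity, specificity, prevalence):
    """Positive and negative predictive values via Bayes' rule."""
    tp = sensitivity * prevalence              # true positives (per patient)
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)
```

    With the study's numbers this gives PPV ≈ 0.82 and NPV ≈ 0.91, matching the abstract; note that both values shift with prevalence, so they transfer poorly to populations with a different responder rate.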

  5. The Impact of Macro-and Micronutrients on Predicting Outcomes of Critically Ill Patients Requiring Continuous Renal Replacement Therapy

    PubMed Central

    Somlaw, Nicha; Lakananurak, Narisorn; Dissayabutra, Thasinas; Phonork, Chayanat; Leelahavanichkul, Asada; Tiranathanagul, Khajohn; Susantithapong, Paweena; Loaveeravat, Passisd; Suwachittanont, Nattachai; Wirotwan, Thaksa-on; Praditpornsilpa, Kearkiat; Tungsanga, Kriang; Eiam-Ong, Somchai; Kittiskulnam, Piyawan

    2016-01-01

    Critically ill patients with acute kidney injury (AKI) who receive renal replacement therapy (RRT) have a very high mortality rate. During RRT there is marked loss of macro- and micronutrients, which may cause malnutrition and result in impaired renal recovery and patient survival. We aimed to examine the predictive role of macro- and micronutrients on survival and renal outcomes in critically ill patients undergoing continuous RRT (CRRT). This prospective observational study enrolled critically ill patients requiring CRRT at the Intensive Care Unit of King Chulalongkorn Memorial Hospital from November 2012 until November 2013. Serum, urine, and effluent fluid were serially collected on the first three days to calculate protein metabolism, including dietary protein intake (DPI), nitrogen balance, and normalized protein catabolic rate (nPCR). Serum zinc, selenium, and copper were measured for micronutrient analysis on the first three days of CRRT. Survivors were defined as being alive on day 28 after initiation of CRRT. Dialysis status on day 28 was also determined. Of the 70 critically ill patients requiring CRRT, 27 patients (37.5%) survived on day 28. The DPI and serum albumin of survivors were significantly higher than those of non-survivors (0.8 ± 0.2 vs 0.5 ± 0.3 g/kg/day, p = 0.001, and 3.2 ± 0.5 vs 2.9 ± 0.5 g/dL, p = 0.03, respectively), while other markers were comparable. The DPI alone predicted patient survival with an area under the curve (AUC) of 0.69. A combined clinical model predicted survival with an AUC of 0.78. When adjusted for differences in albumin level, clinical severity scores (APACHE II and SOFA), and serum creatinine at initiation of CRRT, DPI still independently predicted survival (odds ratio 4.62, p = 0.009). The serum levels of micronutrients in both groups were comparable and unaltered following CRRT. Regarding renal outcome, patients in the dialysis independent group had higher serum albumin levels than the dialysis dependent group, p = 0.01. In
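
    Of the protein-metabolism quantities named above, nitrogen balance is the most mechanical to compute. A heavily hedged sketch: the 6.25 g protein per g nitrogen conversion is standard, but the loss terms and the fixed insensible-loss allowance below are illustrative, not the study's exact protocol:

```python
def nitrogen_balance(protein_intake_g, urinary_n_g, effluent_n_g,
                     other_losses_g=2.0):
    """Nitrogen balance (g/day) = nitrogen in - nitrogen out.
    Protein intake is converted to nitrogen with the standard 6.25 factor;
    `other_losses_g` is an assumed allowance for insensible losses."""
    intake_n = protein_intake_g / 6.25
    return intake_n - (urinary_n_g + effluent_n_g + other_losses_g)
```

    A negative balance indicates net protein catabolism; during CRRT the effluent term matters because amino acids and peptides are cleared by the circuit.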

  6. Net reclassification indices for evaluating risk prediction instruments: a critical review.

    PubMed

    Kerr, Kathleen F; Wang, Zheyu; Janes, Holly; McClelland, Robyn L; Psaty, Bruce M; Pepe, Margaret S

    2014-01-01

    Net reclassification indices have recently become popular statistics for measuring the prediction increment of new biomarkers. We review the various types of net reclassification indices and their correct interpretations. We evaluate the advantages and disadvantages of quantifying the prediction increment with these indices. For predefined risk categories, we relate net reclassification indices to existing measures of the prediction increment. We also consider statistical methodology for constructing confidence intervals for net reclassification indices and evaluate the merits of hypothesis testing based on such indices. We recommend that investigators using net reclassification indices should report them separately for events (cases) and nonevents (controls). When there are two risk categories, the components of net reclassification indices are the same as the changes in the true- and false-positive rates. We advocate the use of true- and false-positive rates and suggest it is more useful for investigators to retain the existing, descriptive terms. When there are three or more risk categories, we recommend against net reclassification indices because they do not adequately account for clinically important differences in shifts among risk categories. The category-free net reclassification index is a new descriptive device designed to avoid predefined risk categories. However, it experiences many of the same problems as other measures such as the area under the receiver operating characteristic curve. In addition, the category-free index can mislead investigators by overstating the incremental value of a biomarker, even in independent validation data. When investigators want to test a null hypothesis of no prediction increment, the well-established tests for coefficients in the regression model are superior to the net reclassification index. 
If investigators want to use net reclassification indices, confidence intervals should be calculated using bootstrap methods
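
    With two risk categories, the event and nonevent components of the index are exactly the changes in the true- and false-positive rates, as the review notes; a minimal sketch (the counts in the test are invented):

```python
def two_category_nri(events_up, events_down, n_events,
                     nonevents_down, nonevents_up, n_nonevents):
    """Two-category net reclassification, reported separately as recommended.
    Event NRI equals the change in the true-positive rate; nonevent NRI
    equals minus the change in the false-positive rate."""
    nri_events = (events_up - events_down) / n_events
    nri_nonevents = (nonevents_down - nonevents_up) / n_nonevents
    return nri_events, nri_nonevents
```

    Summing the two components into a single number is exactly what the review argues against, since it obscures which side of the classification improved.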

  7. Net Reclassification Indices for Evaluating Risk-Prediction Instruments: A Critical Review

    PubMed Central

    Kerr, Kathleen F.; Wang, Zheyu; Janes, Holly; McClelland, Robyn L.; Psaty, Bruce M.; Pepe, Margaret S.

    2014-01-01

    Net reclassification indices have recently become popular statistics for measuring the prediction increment of new biomarkers. We review the various types of net reclassification indices and their correct interpretations. We evaluate the advantages and disadvantages of quantifying the prediction increment with these indices. For pre-defined risk categories, we relate net reclassification indices to existing measures of the prediction increment. We also consider statistical methodology for constructing confidence intervals for net reclassification indices and evaluate the merits of hypothesis testing based on such indices. We recommend that investigators using net reclassification indices should report them separately for events (cases) and nonevents (controls). When there are two risk categories, the components of net reclassification indices are the same as the changes in the true-positive and false-positive rates. We advocate use of true- and false-positive rates and suggest it is more useful for investigators to retain the existing, descriptive terms. When there are three or more risk categories, we recommend against net reclassification indices because they do not adequately account for clinically important differences in shifts among risk categories. The category-free net reclassification index is a new descriptive device designed to avoid pre-defined risk categories. However, it suffers from many of the same problems as other measures such as the area under the receiver operating characteristic curve. In addition, the category-free index can mislead investigators by overstating the incremental value of a biomarker, even in independent validation data. When investigators want to test a null hypothesis of no prediction increment, the well-established tests for coefficients in the regression model are superior to the net reclassification index. If investigators want to use net reclassification indices, confidence intervals should be calculated using bootstrap

  8. FACTOR VIII MAY PREDICT CATHETER-RELATED THROMBOSIS IN CRITICALLY ILL CHILDREN: A PRELIMINARY STUDY

    PubMed Central

    Faustino, Edward Vincent S.; Li, Simon; Silva, Cicero T.; Pinto, Matthew G.; Qin, Li; Tala, Joana A.; Rinder, Henry M.; Kupfer, Gary M.; Shapiro, Eugene D.

    2015-01-01

    OBJECTIVE: If we can identify critically ill children at high risk for central venous catheter-related thrombosis, then we could target them for pharmacologic thromboprophylaxis. We determined whether factor VIII activity or G value was associated with catheter-related thrombosis in critically ill children. DESIGN: Prospective cohort study. SETTING: Two tertiary academic centers. PATIENTS: We enrolled children <18 years old who were admitted to the pediatric intensive care unit within 24 hours after insertion of a central venous catheter. We excluded children with a recently diagnosed thrombotic event or those anticipated to receive anticoagulation. Children with thrombosis diagnosed with surveillance ultrasonography on the day of enrollment were classified as having prevalent thrombosis. Those who developed catheter-related thrombosis thereafter were classified as having incident thrombosis. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: We enrolled 85 children in the study. Once enrolled, we measured factor VIII activity with a one-stage clotting assay and determined G value with thromboelastography. Of those enrolled, 25 had incident and 12 had prevalent thromboses. The odds ratio for incident thrombosis per standard deviation increase in factor VIII activity was 1.98 (95% confidence interval: 1.10-3.55). The area under the receiver operating characteristic curve was 0.66 (95% confidence interval: 0.52-0.79). At factor VIII activity >100 IU/dL, which was the optimal threshold identified using the Youden index, sensitivity and specificity were 92.0% and 41.3%, respectively. The association between factor VIII activity and incident thrombosis remained significant after adjusting for important clinical predictors of thrombosis (odds ratio: 1.93; 95% confidence interval: 1.10-3.39). G value was associated with prevalent but not with incident thrombosis. CONCLUSION: Factor VIII activity may be used to stratify critically ill children based on their risk for catheter
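
    The Youden-index threshold selection used above picks the cutoff maximizing J = sensitivity + specificity - 1. A self-contained sketch with toy data (not the study's measurements):

```python
def best_threshold_by_youden(values, labels, thresholds):
    """Return (threshold, J) for the cutoff maximizing Youden's J,
    classifying value > threshold as positive; labels are 0/1."""
    best_t, best_j = None, float("-inf")
    for t in thresholds:
        tp = sum(1 for v, y in zip(values, labels) if v > t and y == 1)
        fn = sum(1 for v, y in zip(values, labels) if v <= t and y == 1)
        tn = sum(1 for v, y in zip(values, labels) if v <= t and y == 0)
        fp = sum(1 for v, y in zip(values, labels) if v > t and y == 0)
        j = tp / (tp + fn) + tn / (tn + fp) - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j
```

    Note that J weights sensitivity and specificity equally; the study's chosen cutoff (high sensitivity, low specificity) reflects that trade-off at the data's optimum, not a clinical cost-benefit analysis.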

  9. Critically Assessing the Predictive Power of QSAR Models for Human Liver Microsomal Stability.

    PubMed

    Liu, Ruifeng; Schyman, Patric; Wallqvist, Anders

    2015-08-24

    To lower the possibility of late-stage failures in the drug development process, an up-front assessment of absorption, distribution, metabolism, elimination, and toxicity is commonly implemented through a battery of in silico and in vitro assays. As in vitro data accumulate, in silico quantitative structure-activity relationship (QSAR) models can be trained and used to assess compounds even before they are synthesized. Even though it is generally recognized that QSAR model performance deteriorates over time, rigorous independent studies of model performance deterioration are typically hindered by the lack of publicly available large data sets of structurally diverse compounds. Here, we investigated the predictive properties of QSAR models derived from an assembly of publicly available human liver microsomal (HLM) stability data using variable nearest neighbor (v-NN) and random forest (RF) methods. In particular, we evaluated the degree of time-dependent model performance deterioration. Our results show that when evaluated by 10-fold cross-validation, with all available HLM data randomly distributed among 10 equal-sized validation groups, both machine-learning methods achieved high-quality model performance. However, when we developed HLM models based on when the data appeared and tried to predict data published later, we found that neither method produced predictive models and that their applicability was dramatically reduced. On the other hand, when a small percentage of randomly selected compounds from data published later were included in the training set, the performance of both machine-learning methods improved significantly. The implication is that 1) QSAR model quality should be analyzed in a time-dependent manner to assess true predictive power, and 2) it is imperative to retrain models with up-to-date experimental data to ensure maximum applicability. PMID:26170251
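
    The time-dependent evaluation the authors advocate boils down to splitting by publication date rather than at random. A minimal sketch (the record format with a "date" key is an assumption for illustration):

```python
def time_split(records, cutoff_date):
    """Train on data published up to the cutoff, test on later data --
    a sterner check of prospective applicability than random cross-validation."""
    train = [r for r in records if r["date"] <= cutoff_date]
    test = [r for r in records if r["date"] > cutoff_date]
    return train, test
```

    Random 10-fold CV lets future chemistry leak into training folds; a time split removes that leakage, which is exactly why the time-split performance reported above collapses while the CV numbers look strong.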

  10. A critical evaluation of the predictions of the NASA-Lockheed multielement airfoil computer program

    NASA Technical Reports Server (NTRS)

    Brune, G. W.; Manke, J. W.

    1978-01-01

    Theoretical predictions of several versions of the multielement airfoil computer program are evaluated. The computed results are compared with experimental high lift data of general aviation airfoils with a single trailing edge flap, and of airfoils with a leading edge flap and double slotted trailing edge flaps. Theoretical and experimental data include lift, pitching moment, profile drag and surface pressure distributions, boundary layer integral parameters, skin friction coefficients, and velocity profiles.

  12. Protein Binding of β-Lactam Antibiotics in Critically Ill Patients: Can We Successfully Predict Unbound Concentrations?

    PubMed Central

    Wong, Gloria; Briscoe, Scott; Adnan, Syamhanin; McWhinney, Brett; Ungerer, Jacobus; Lipman, Jeffrey

    2013-01-01

    The use of therapeutic drug monitoring (TDM) to optimize beta-lactam dosing in critically ill patients is growing in popularity, although there are limited data describing the potential impact of altered protein binding on achievement of target concentrations. The aim of this study was to compare the measured unbound concentration to the unbound concentration predicted from published protein binding values for seven beta-lactams using data from blood samples obtained from critically ill patients. From 161 eligible patients, we obtained 228 and 220 plasma samples at the midpoint of the dosing interval and trough, respectively, for ceftriaxone, cefazolin, meropenem, piperacillin, ampicillin, benzylpenicillin, and flucloxacillin. The total and unbound beta-lactam concentrations were measured using validated methods. Variabilities in both unbound and total concentrations were marked for all antibiotics, with significant differences being present between measured and predicted unbound concentrations for ceftriaxone and for flucloxacillin at the mid-dosing interval (P < 0.05). The predictive performance for calculating unbound concentrations using published protein binding values was poor, with bias for overprediction of unbound concentrations for ceftriaxone (83.3%), flucloxacillin (56.8%), and benzylpenicillin (25%) and underprediction for meropenem (12.1%). Linear correlations between the measured total and unbound concentrations were observed for all beta-lactams (R2 = 0.81 to 1.00; P < 0.05) except ceftriaxone and flucloxacillin. The percent protein binding of flucloxacillin and the plasma albumin concentration were also found to be linearly correlated (R2 = 0.776; P < 0.01). In conclusion, significant differences between measured and predicted unbound drug concentrations were found only for the highly protein-bound beta-lactams ceftriaxone and flucloxacillin. However, direct measurement of unbound drug in research and clinical practice is suggested for selected
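
    The prediction being tested is simply unbound = total × (1 − published fraction bound), and the bias statistic follows directly. A sketch with invented numbers (the 95% binding figure is only an illustration, not a claim about any specific drug):

```python
def predicted_unbound(total_conc, fraction_bound):
    """Unbound concentration predicted from a published protein-binding value."""
    return total_conc * (1.0 - fraction_bound)

def bias_percent(predicted, measured):
    """Relative prediction bias; positive means overprediction."""
    return 100.0 * (predicted - measured) / measured
```

    The study's point is that for highly bound drugs a small error in the assumed fraction bound produces a large relative error in the unbound estimate, which is why direct measurement is suggested for those agents.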

  13. Use of quantitative shape-activity relationships to model the photoinduced toxicity of polycyclic aromatic hydrocarbons: Electron density shape features accurately predict toxicity

    SciTech Connect

    Mezey, P.G.; Zimpel, Z.; Warburton, P.; Walker, P.D.; Irvine, D.G.; Huang, X.D.; Dixon, D.G.; Greenberg, B.M.

    1998-07-01

    The quantitative shape-activity relationship (QShAR) methodology, based on accurate three-dimensional electron densities and detailed shape analysis methods, has been applied to a Lemna gibba photoinduced toxicity data set of 16 polycyclic aromatic hydrocarbon (PAH) molecules. In the first phase of the studies, a shape fragment QShAR database of PAHs was developed. The results provide a very good match to toxicity based on a combination of the local shape features of single rings in comparison to the central ring of anthracene and a more global shape feature involving larger molecular fragments. The local shape feature appears as a descriptor of the susceptibility of PAHs to photomodification and the global shape feature is probably related to photosensitization activity.

  14. Use of dose-dependent absorption into target tissues to more accurately predict cancer risk at low oral doses of hexavalent chromium.

    PubMed

    Haney, J

    2015-02-01

    The mouse dose at the lowest water concentration used in the National Toxicology Program hexavalent chromium (CrVI) drinking water study (NTP, 2008) is about 74,500 times higher than the approximate human dose corresponding to the 35-city geometric mean reported in EWG (2010) and over 1000 times higher than that based on the highest reported tap water concentration. With experimental and environmental doses differing greatly, it is a regulatory challenge to extrapolate high-dose results to environmental doses orders of magnitude lower in a meaningful and toxicologically predictive manner. This seems particularly true for the low-dose extrapolation of results for oral CrVI-induced carcinogenesis since dose-dependent differences in the dose fraction absorbed by mouse target tissues are apparent (Kirman et al., 2012). These data can be used for a straightforward adjustment of the USEPA (2010) draft oral slope factor (SFo) to be more predictive of risk at environmentally-relevant doses. More specifically, the evaluation of observed and modeled differences in the fraction of dose absorbed by target tissues at the point-of-departure for the draft SFo calculation versus lower doses suggests that the draft SFo be divided by a dose-specific adjustment factor of at least an order of magnitude to be less over-predictive of risk at more environmentally-relevant doses.

  15. Prediction of risk and incidence of dry eye in critical patients

    PubMed Central

    de Araújo, Diego Dias; Almeida, Natália Gherardi; Silva, Priscila Marinho Aleixo; Ribeiro, Nayara Souza; Werli-Alvarenga, Andreza; Chianca, Tânia Couto Machado

    2016-01-01

    Objectives: to estimate the incidence of dry eye, identify risk factors and establish a risk prediction model for its development in adult patients admitted to the intensive care unit of a public hospital. Method: concurrent cohort study, conducted between March and June 2014, with 230 patients admitted to an intensive care unit. Data were analyzed by bivariate descriptive statistics, with multivariate survival analysis and Cox regression. Results: 53% of the 230 patients developed dry eye, with a mean onset time of 3.5 days. Independent variables that significantly and concurrently affected the time to onset of dry eye were: O2 in room air, blinking more than five times per minute (lower risk factors) and presence of vascular disease (higher risk factor). Conclusion: dry eye is a common finding in patients admitted to adult intensive care units, and care for its prevention should be established. PMID:27192415

  16. Serum creatinine level, a surrogate of muscle mass, predicts mortality in critically ill patients

    PubMed Central

    Thongprayoon, Charat; Cheungpasitporn, Wisit

    2016-01-01

    Serum creatinine (SCr) has been widely used to estimate glomerular filtration rate (GFR). Creatinine generation could be reduced in the setting of low skeletal muscle mass. Thus, SCr has also been used as a surrogate of muscle mass. Low muscle mass is associated with reduced survival in hospitalized patients, especially in the intensive care unit (ICU) setting. Recently, studies have demonstrated high mortality in ICU patients with low admission SCr levels, reflecting that low muscle mass and malnutrition are associated with increased mortality. However, SCr levels can also be influenced by multiple GFR- and non-GFR-related factors including age, diet, exercise, stress, pregnancy, and kidney disease. Imaging techniques, such as computed tomography (CT) and ultrasound, have recently been studied for muscle mass assessment and demonstrated promising data. This article aims to present the perspectives of the uses of SCr and other methods for prediction of muscle mass and outcomes of ICU patients. PMID:27162688

  17. A model to predict the critical velocity for liquid loss from a receding meniscus

    NASA Astrophysics Data System (ADS)

    Shedd, Timothy A.; Schuetter, Scott D.; Nellis, Gregory F.; Van Peski, Chris K.

    2006-10-01

    This paper is a revision of the authors' previous work entitled "Experimental characterization of the receding meniscus under conditions associated with immersion lithography," presented in Optical Microlithography XIX, edited by Donis G. Flagello, Proceedings of SPIE Vol. 6154 (SPIE, Bellingham, WA, 2006) 61540R. Several engineering challenges accompany the insertion of the immersion fluid in a production tool, one of the most important being the confinement of a relatively small amount of liquid to the under-lens region. The semiconductor industry demands high throughput, leading to relatively large wafer scan velocities and accelerations. These result in large viscous and inertial forces on the three-phase contact line between the liquid, air, and substrate. If the fluid dynamic forces exceed the resisting surface tension force then residual liquid is deposited onto the substrate that has passed beneath the lens. Liquid deposition is undesirable; as the droplets evaporate they will deposit impurities on the substrate. In an immersion lithography tool, these impurities may be transmitted to the printed pattern as defects. A substantial effort was undertaken relative to the experimental investigation of the static and dynamic contact angle under conditions that are consistent with immersion lithography. A semi-empirical model is described here in order to predict the velocity at which liquid loss occurs. This model is based on fluid physics and correlated to measurements of the dynamic and static contact angles. The model describes two regimes, an inertial and a capillary regime, that are characterized by two distinct liquid loss processes. The semi-empirical model provides the semiconductor industry with a useful predictive tool for reducing defects associated with film pulling.

  18. PredPPCrys: Accurate Prediction of Sequence Cloning, Protein Production, Purification and Crystallization Propensity from Protein Sequences Using Multi-Step Heterogeneous Feature Fusion and Selection

    PubMed Central

    Wang, Huilin; Wang, Mingjun; Tan, Hao; Li, Yuan; Zhang, Ziding; Song, Jiangning

    2014-01-01

    X-ray crystallography is the primary approach to solve the three-dimensional structure of a protein. However, a major bottleneck of this method is the failure of multi-step experimental procedures to yield diffraction-quality crystals, including sequence cloning, protein material production, purification, crystallization and ultimately, structural determination. Accordingly, prediction of the propensity of a protein to successfully undergo these experimental procedures based on the protein sequence may help narrow down laborious experimental efforts and facilitate target selection. A number of bioinformatics methods based on protein sequence information have been developed for this purpose. However, our knowledge on the important determinants of propensity for a protein sequence to produce high diffraction-quality crystals remains largely incomplete. In practice, most of the existing methods display poorer performance when evaluated on larger and updated datasets. To address this problem, we constructed an up-to-date dataset as the benchmark, and subsequently developed a new approach termed ‘PredPPCrys’ using the support vector machine (SVM). Using a comprehensive set of multifaceted sequence-derived features in combination with a novel multi-step feature selection strategy, we identified and characterized the relative importance and contribution of each feature type to the prediction performance of five individual experimental steps required for successful crystallization. The resulting optimal candidate features were used as inputs to build the first-level SVM predictor (PredPPCrys I). Next, prediction outputs of PredPPCrys I were used as the input to build second-level SVM classifiers (PredPPCrys II), which led to significantly enhanced prediction performance. Benchmarking experiments indicated that our PredPPCrys method outperforms most existing procedures on both up-to-date and previous datasets. In addition, the predicted crystallization targets of

  19. Normal Tissue Complication Probability Estimation by the Lyman-Kutcher-Burman Method Does Not Accurately Predict Spinal Cord Tolerance to Stereotactic Radiosurgery

    SciTech Connect

    Daly, Megan E.; Luxton, Gary; Choi, Clara Y.H.; Gibbs, Iris C.; Chang, Steven D.; Adler, John R.; Soltys, Scott G.

    2012-04-01

    Purpose: To determine whether normal tissue complication probability (NTCP) analyses of the human spinal cord by use of the Lyman-Kutcher-Burman (LKB) model, supplemented by linear-quadratic modeling to account for the effect of fractionation, predict the risk of myelopathy from stereotactic radiosurgery (SRS). Methods and Materials: From November 2001 to July 2008, 24 spinal hemangioblastomas in 17 patients were treated with SRS. Of the tumors, 17 received 1 fraction with a median dose of 20 Gy (range, 18-30 Gy) and 7 received 20 to 25 Gy in 2 or 3 sessions, with cord maximum doses of 22.7 Gy (range, 17.8-30.9 Gy) and 22.0 Gy (range, 20.2-26.6 Gy), respectively. By use of conventional values for α/β, volume parameter n, 50% complication probability dose TD50, and inverse slope parameter m, a computationally simplified implementation of the LKB model was used to calculate the biologically equivalent uniform dose and NTCP for each treatment. Exploratory calculations were performed with alternate values of α/β and n. Results: In this study 1 case (4%) of myelopathy occurred. The LKB model using radiobiological parameters from Emami and the logistic model with parameters from Schultheiss overestimated complication rates, predicting 13 complications (54%) and 18 complications (75%), respectively. An increase in the volume parameter (n), to assume greater parallel organization, improved the predictive value of the models. Maximum-likelihood LKB fitting of α/β and n yielded better predictions (0.7 complications), with n = 0.023 and α/β = 17.8 Gy. Conclusions: The spinal cord tolerance to the dosimetry of SRS is higher than predicted by the LKB model using any set of accepted parameters. Only a high α/β value in the LKB model and only a large volume effect in the logistic model with Schultheiss data could explain the low number of complications observed. This finding emphasizes that radiobiological models
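For readers unfamiliar with the LKB machinery: the model reduces a dose-volume histogram to a generalized equivalent uniform dose (gEUD) and passes it through a normal-CDF (probit) function. A minimal sketch in Python, with a hypothetical two-bin DVH; the n, m, TD50 values shown are the commonly quoted Burman-fit spinal cord parameters, and no linear-quadratic fractionation correction is applied here:

```python
import math

def generalized_eud(dose_volume, n):
    """gEUD = (sum v_i * D_i**(1/n))**n for a fractional-volume DVH."""
    return sum(v * d ** (1.0 / n) for d, v in dose_volume) ** n

def lkb_ntcp(eud, td50, m):
    """LKB NTCP: standard normal CDF of t = (EUD - TD50) / (m * TD50)."""
    t = (eud - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Hypothetical DVH: 10% of the cord volume at 20 Gy, 90% at 2 Gy.
dvh = [(20.0, 0.10), (2.0, 0.90)]
# Commonly quoted Burman-fit spinal cord parameters: n=0.05, m=0.175, TD50=66.5 Gy.
eud = generalized_eud(dvh, n=0.05)
print(f"gEUD = {eud:.1f} Gy, NTCP = {lkb_ntcp(eud, 66.5, 0.175):.5f}")
```

With a small n (serial organ), the gEUD is pulled toward the maximum dose, which is why the volume parameter matters so much in the study's exploratory fits.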

  20. The CUPIC algorithm: an accurate model for the prediction of sustained viral response under telaprevir or boceprevir triple therapy in cirrhotic patients.

    PubMed

    Boursier, J; Ducancelle, A; Vergniol, J; Veillon, P; Moal, V; Dufour, C; Bronowicki, J-P; Larrey, D; Hézode, C; Zoulim, F; Fontaine, H; Canva, V; Poynard, T; Allam, S; De Lédinghen, V

    2015-12-01

    Triple therapy using boceprevir or telaprevir remains the reference treatment for genotype 1 chronic hepatitis C in countries where new interferon-free regimens have not yet become available. Antiviral treatment is urgently needed in cirrhotic patients, but they represent a difficult-to-treat population. We aimed to develop a simple algorithm for the prediction of sustained viral response (SVR) in cirrhotic patients treated with triple therapy. A total of 484 cirrhotic patients from the ANRS CO20 CUPIC cohort treated with triple therapy were randomly distributed into derivation and validation sets. A total of 52.1% of patients achieved SVR. In the derivation set, a D0 score for the prediction of SVR before treatment initiation included the following independent predictors collected at day 0: prior treatment response, gamma-GT, platelets, telaprevir treatment, viral load. To refine the prediction in the early phase of treatment, a W4 score included the viral load collected at week 4 as an additional parameter. The D0 and W4 scores were combined in the CUPIC algorithm, defining three subgroups: 'no treatment initiation or early stop at week 4', 'undetermined' and 'SVR highly probable'. In the validation set, the rates of SVR in these three subgroups were, respectively, 11.1%, 50.0% and 82.2% (P < 0.001). By replacing the variable 'prior treatment response' with 'IL28B genotype', another algorithm was derived for treatment-naïve patients, with similar results. The CUPIC algorithm is an easy-to-use tool that helps physicians weigh their decision between immediately treating cirrhotic patients using boceprevir/telaprevir triple therapy or waiting for new drugs to become available in their country. PMID:26216230
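The published score coefficients are not reproduced in the abstract, but the triage logic itself can be sketched: a baseline D0 score gates treatment initiation, and the week-4 W4 score refines the call into the three subgroups. Everything below (the score scale and both cut-offs) is hypothetical, not the paper's fitted values:

```python
def cupic_triage(d0_score, w4_score, stop_cut=0.25, svr_cut=0.75):
    """Three-way triage; scores assumed normalized to [0, 1] (hypothetical)."""
    if max(d0_score, w4_score) < stop_cut:
        return "no treatment initiation or early stop at week 4"
    if w4_score >= svr_cut:
        return "SVR highly probable"
    return "undetermined"

print(cupic_triage(0.1, 0.1))  # both scores low -> stop early
print(cupic_triage(0.6, 0.9))  # strong week-4 response -> SVR highly probable
```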

  1. Outcome prediction based on microarray analysis: a critical perspective on methods

    PubMed Central

    Zervakis, Michalis; Blazadonakis, Michalis E; Tsiliki, Georgia; Danilatou, Vasiliki; Tsiknakis, Manolis; Kafetzopoulos, Dimitris

    2009-01-01

    Background Information extraction from microarrays has not yet been widely used in diagnostic or prognostic decision-support systems, due to the diversity of results produced by the available techniques, their instability on different data sets and the inability to relate statistical significance with biological relevance. Thus, there is an urgent need to address the statistical framework of microarray analysis and identify its drawbacks and limitations, which will enable us to thoroughly compare methodologies under the same experimental set-up and associate results with confidence intervals meaningful to clinicians. In this study we consider gene-selection algorithms with the aim to reveal inefficiencies in performance evaluation and address aspects that can reduce uncertainty in algorithmic validation. Results A computational study is performed related to the performance of several gene selection methodologies on publicly available microarray data. Three basic types of experimental scenarios are evaluated, i.e. the independent test-set and the 10-fold cross-validation (CV) using maximum and average performance measures. Feature selection methods behave differently under different validation strategies. The performance results from CV do not match well with those from the independent test-set, except for the support vector machines (SVM) and the least squares SVM methods. However, these wrapper methods achieve variable (often low) performance, whereas the hybrid methods attain consistently higher accuracies. The use of an independent test-set within CV is important for the evaluation of the predictive power of algorithms. The optimal size of the selected gene-set also appears to be dependent on the evaluation scheme. The consistency of selected genes over variation of the training-set is another aspect important in reducing uncertainty in the evaluation of the derived gene signature. In all cases the presence of outlier samples can seriously affect algorithmic
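The point about an independent test set within CV can be made concrete: reserve a holdout before any fold-based gene selection, then cross-validate only on the remainder, touching the holdout once for the final estimate. A generic index-splitting sketch (not the authors' pipeline; sizes and fold count are illustrative):

```python
import random

def split_with_holdout(n_samples, k=10, holdout_frac=0.2, seed=0):
    """Reserve an independent test set, then form k disjoint CV folds
    from the remaining sample indices."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    cut = int(n_samples * holdout_frac)
    holdout, rest = idx[:cut], idx[cut:]
    folds = [rest[i::k] for i in range(k)]
    return holdout, folds

holdout, folds = split_with_holdout(100)
# Gene selection and model tuning happen inside the folds only;
# the holdout is used once, for the final performance estimate.
print(len(holdout), [len(f) for f in folds])
```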

  2. Ab initio molecular dynamics of liquid water using embedded-fragment second-order many-body perturbation theory towards its accurate property prediction

    PubMed Central

    Willow, Soohaeng Yoo; Salim, Michael A.; Kim, Kwang S.; Hirata, So

    2015-01-01

    A direct, simultaneous calculation of properties of a liquid using an ab initio electron-correlated theory has long been unthinkable. Here we present structural, dynamical, and response properties of liquid water calculated by ab initio molecular dynamics using the embedded-fragment spin-component-scaled second-order many-body perturbation method with the aug-cc-pVDZ basis set. This level of theory is chosen as it accurately and inexpensively reproduces the water dimer potential energy surface from the coupled-cluster singles, doubles, and noniterative triples with the aug-cc-pVQZ basis set, which is nearly exact. The calculated radial distribution function, self-diffusion coefficient, coordination number, and dipole moment, as well as the infrared and Raman spectra are in excellent agreement with experimental results. The shapes and widths of the OH stretching bands in the infrared and Raman spectra and their isotropic-anisotropic Raman noncoincidence, which reflect the diverse local hydrogen-bond environment, are also reproduced computationally. The simulation also reveals intriguing dynamic features of the environment, which are difficult to probe experimentally, such as a surprisingly large fluctuation in the coordination number and the detailed mechanism by which the hydrogen donating water molecules move across the first and second shells, thereby causing this fluctuation. PMID:26400690

  4. Stable, high-order SBP-SAT finite difference operators to enable accurate simulation of compressible turbulent flows on curvilinear grids, with application to predicting turbulent jet noise

    NASA Astrophysics Data System (ADS)

    Byun, Jaeseung; Bodony, Daniel; Pantano, Carlos

    2014-11-01

    Improved order-of-accuracy discretizations often require careful consideration of their numerical stability. We report on new high-order finite difference schemes using Summation-By-Parts (SBP) operators along with the Simultaneous-Approximation-Terms (SAT) boundary condition treatment for first and second-order spatial derivatives with variable coefficients. In particular, we present a highly accurate operator for SBP-SAT-based approximations of second-order derivatives with variable coefficients for Dirichlet and Neumann boundary conditions. These terms are responsible for approximating the physical dissipation of kinetic and thermal energy in a simulation, and contain grid metrics when the grid is curvilinear. Analysis using the Laplace transform method shows that strong stability is ensured with Dirichlet boundary conditions while weaker stability is obtained for Neumann boundary conditions. Furthermore, the benefits of the scheme are shown in the direct numerical simulation (DNS) of a Mach 1.5 compressible turbulent supersonic jet using curvilinear grids and skew-symmetric discretization. In particular, we show that the improved methods allow minimization of the numerical filter often employed in these simulations, and we discuss the qualities of the simulation.
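As a toy illustration of the SBP property itself (far simpler than the paper's high-order, variable-coefficient operators): the classic second-order operator D = H⁻¹Q on a uniform grid satisfies a discrete integration-by-parts identity exactly, u·H(Dv) + (Du)·Hv = u_N v_N − u_0 v_0. A self-contained sketch:

```python
def sbp_apply(u, h):
    """Apply the classic 2nd-order SBP first derivative D = H^-1 Q:
    central differences inside, one-sided closures at the boundaries."""
    n = len(u)
    du = [0.0] * n
    du[0] = (u[1] - u[0]) / h
    du[-1] = (u[-1] - u[-2]) / h
    for i in range(1, n - 1):
        du[i] = (u[i + 1] - u[i - 1]) / (2.0 * h)
    return du

def h_weighted_dot(u, v, h):
    """Inner product with the SBP norm H = h * diag(1/2, 1, ..., 1, 1/2)."""
    s = 0.5 * u[0] * v[0] + 0.5 * u[-1] * v[-1]
    s += sum(u[i] * v[i] for i in range(1, len(u) - 1))
    return h * s

n, h = 11, 0.1
x = [i * h for i in range(n)]
u = [xi ** 2 for xi in x]
v = [xi ** 3 for xi in x]
lhs = h_weighted_dot(u, sbp_apply(v, h), h) + h_weighted_dot(sbp_apply(u, h), v, h)
rhs = u[-1] * v[-1] - u[0] * v[0]
print(abs(lhs - rhs))  # zero up to floating-point rounding
```

This mimicry of integration by parts is what lets energy estimates carry over from the continuous problem, which is the basis of the stability proofs mentioned in the abstract.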

  5. Accurate prediction of diradical chemistry from a single-reference density-matrix method: Model application to the bicyclobutane to gauche-1,3-butadiene isomerization

    SciTech Connect

    Bertels, Luke W.; Mazziotti, David A.

    2014-07-28

    Multireference correlation in diradical molecules can be captured by a single-reference 2-electron reduced-density-matrix (2-RDM) calculation with only single and double excitations in the 2-RDM parametrization. The 2-RDM parametrization is determined by N-representability conditions that are non-perturbative in their treatment of the electron correlation. Conventional single-reference wave function methods cannot describe the entanglement within diradical molecules without employing triple- and potentially even higher-order excitations of the mean-field determinant. In the isomerization of bicyclobutane to gauche-1,3-butadiene the parametric 2-RDM (p2-RDM) method predicts that the diradical disrotatory transition state is 58.9 kcal/mol above bicyclobutane. This barrier is in agreement with previous multireference calculations as well as recent Monte Carlo and higher-order coupled cluster calculations. The p2-RDM method predicts the Nth natural-orbital occupation number of the transition state to be 0.635, revealing its diradical character. The optimized geometry from the p2-RDM method differs in important details from the complete-active-space self-consistent-field geometry used in many previous studies including the Monte Carlo calculation.

  6. Decreases in Perceived Maternal Criticism Predict Improvement in Subthreshold Psychotic Symptoms in a Randomized Trial of Family-Focused Therapy for Individuals at Clinical High Risk for Psychosis

    PubMed Central

    O’Brien, Mary P.; Miklowitz, David J.; Cannon, Tyrone D.

    2015-01-01

    Perceived criticism (PC) is a measure of how much criticism from 1 family member “gets through” to another. PC ratings have been found to predict the course of psychotic disorders, but questions remain regarding whether psychosocial treatment can effectively decrease PC, and whether reductions in PC predict symptom improvement. In a sample of individuals at high risk for psychosis, we examined a) whether Family Focused Therapy for Clinical High-Risk (FFT-CHR), an 18-session intervention that consists of psychoeducation and training in communication and problem solving, brought about greater reductions in perceived maternal criticism, compared to a 3-session family psychoeducational intervention; and b) whether reductions in PC from baseline to 6-month reassessment predicted decreases in subthreshold positive symptoms of psychosis at 12-month follow-up. This study was conducted within a randomized controlled trial across 8 sites. The perceived criticism scale was completed by 90 families prior to treatment and by 41 families at 6-month reassessment. Evaluators, blind to treatment condition, rated subthreshold symptoms of psychosis at baseline, 6- and 12-month assessments. Perceived maternal criticism decreased from pre- to posttreatment for both treatment groups, and this change in criticism predicted decreases in subthreshold positive symptoms at 12-month follow-up. This study offers evidence that participation in structured family treatment is associated with improvement in perceptions of the family environment. Further, a brief measure of perceived criticism may be useful in predicting the future course of attenuated symptoms of psychosis for CHR youth. PMID:26168262

  8. Severity scoring in the critically ill: part 2: maximizing value from outcome prediction scoring systems.

    PubMed

    Breslow, Michael J; Badawi, Omar

    2012-02-01

    Part 2 of this review of ICU scoring systems examines how scoring system data should be used to assess ICU performance. There often are two different consumers of these data: ICU clinicians and quality leaders who seek to identify opportunities to improve quality of care and operational efficiency, and regulators, payors, and consumers who want to compare performance across facilities. The former need to know how to garner maximal insight into their care practices; this includes understanding how length of stay (LOS) relates to quality, analyzing the behavior of different subpopulations, and following trends over time. Segregating patients into low-, medium-, and high-risk populations is especially helpful, because care issues and outcomes may differ across this severity continuum. Also, LOS behaves paradoxically in high-risk patients (survivors often have longer LOS than nonsurvivors); failure to examine this subgroup separately can penalize ICUs with superior outcomes. Consumers of benchmarking data often focus on a single score, the standardized mortality ratio (SMR). However, simple SMRs are disproportionately affected by outcomes in high-risk patients, and differences in population composition, even when performance is otherwise identical, can result in different SMRs. Future benchmarking must incorporate strategies to adjust for differences in population composition and report performance separately for low-, medium- and high-acuity patients. Moreover, because many ICUs lack the resources to care for high-acuity patients (predicted mortality >50%), decisions about where patients should receive care must consider both ICU performance scores and their capacity to care for different types of patients. PMID:22315120
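The SMR caveat is easy to demonstrate numerically: the SMR is observed deaths divided by expected deaths (the sum of predicted mortality probabilities), so with identical per-stratum performance, case mix alone moves the denominator and hence the ratio. A minimal sketch with made-up risk profiles:

```python
def smr(observed_deaths, predicted_probs):
    """Standardized mortality ratio: observed deaths / expected deaths."""
    return observed_deaths / sum(predicted_probs)

# Two ICUs with identical per-stratum performance (observed deaths equal
# 0.8x expected in low-risk patients and 1.0x expected in high-risk
# patients) but different case mix; all numbers are illustrative.
def observed(n_low, n_high):
    return 0.8 * (n_low * 0.05) + 1.0 * (n_high * 0.6)

icu_a = [0.05] * 80 + [0.6] * 20   # mostly low-acuity admissions
icu_b = [0.05] * 20 + [0.6] * 80   # mostly high-acuity admissions
print(smr(observed(80, 20), icu_a))  # ~0.95
print(smr(observed(20, 80), icu_b))  # closer to 1.00, despite identical care
```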

  10. SNP development from RNA-seq data in a nonmodel fish: how many individuals are needed for accurate allele frequency prediction?

    PubMed

    Schunter, C; Garza, J C; Macpherson, E; Pascual, M

    2014-01-01

    Single nucleotide polymorphisms (SNPs) are rapidly becoming the marker of choice in population genetics due to a variety of advantages relative to other markers, including higher genomic density, data quality, reproducibility and genotyping efficiency, as well as ease of portability between laboratories. Advances in sequencing technology and methodologies to reduce genomic representation have made the isolation of SNPs feasible for nonmodel organisms. RNA-seq is one such technique for the discovery of SNPs and development of markers for large-scale genotyping. Here, we report the development of 192 validated SNP markers for parentage analysis in Tripterygion delaisi (the black-faced blenny), a small rocky-shore fish from the Mediterranean Sea. RNA-seq data for 15 individual samples were used for SNP discovery by applying a series of selection criteria. Genotypes were then collected from 1599 individuals from the same population with the resulting loci. Differences in heterozygosity and allele frequencies were found between the two data sets. Heterozygosity was lower, on average, in the population sample, and the mean difference between the frequencies of particular alleles in the two data sets was 0.135 ± 0.100. We used bootstrap resampling of the sequence data to predict appropriate sample sizes for SNP discovery. As cDNA library production is time-consuming and expensive, we suggest that using seven individuals for RNA sequencing reduces the probability of discarding highly informative SNP loci, due to lack of observed polymorphism, whereas use of more than 12 samples does not considerably improve prediction of true allele frequencies.
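The sample-size question the authors answer by bootstrap resampling can be sketched generically: resample small subsets of diploid genotypes and measure how far each subset's allele frequency strays from the full-sample frequency. All numbers below are made up; only the resampling idea mirrors the study:

```python
import random

def bootstrap_freq_error(genotypes, subsample_size, true_freq, reps=1000, seed=1):
    """Mean absolute error of allele-frequency estimates computed from
    resampled subsets of `subsample_size` diploid individuals.
    `genotypes` holds minor-allele counts per individual (0, 1 or 2)."""
    rng = random.Random(seed)
    total_err = 0.0
    for _ in range(reps):
        sample = [rng.choice(genotypes) for _ in range(subsample_size)]
        freq = sum(sample) / (2.0 * subsample_size)  # two alleles per diploid
        total_err += abs(freq - true_freq)
    return total_err / reps

# Hypothetical population of 1599 fish, minor-allele frequency ~0.28.
population = [0] * 900 + [1] * 500 + [2] * 199
true_f = sum(population) / (2.0 * len(population))
for n in (7, 12, 15):
    print(n, round(bootstrap_freq_error(population, n, true_f), 3))
```

The error shrinks roughly as 1/sqrt(n), which is consistent with the diminishing returns the authors report beyond about 12 individuals.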

  11. Can a scoring system of computed tomography-based metric parameters accurately predict shock wave lithotripsy stone-free rates and aid in the development of treatment strategies?

    PubMed Central

    Badran, Yasser Ali; Abdelaziz, Alsayed Saad; Shehab, Mohamed Ahmed; Mohamed, Hazem Abdelsabour Dief; Emara, Absel-Aziz Ali; Elnabtity, Ali Mohamed Ali; Ghanem, Maged Mohammed; ELHelaly, Hesham Abdel Azim

    2016-01-01

    Objective: The objective was to determine the predictors of success of shock wave lithotripsy (SWL) using a combination of computed tomography-based metric parameters, in order to improve treatment planning. Patients and Methods: 180 consecutive patients with symptomatic upper urinary tract calculi of 20 mm or less who underwent extracorporeal SWL were enrolled in our study and divided into two main groups according to stone size: Group A (92 patients with stones ≤10 mm) and Group B (88 patients with stones >10 mm). Both groups were evaluated according to skin-to-stone distance (SSD) and Hounsfield units (≤500, 500–1000 and >1000 HU). Results: Both groups were comparable in baseline data and stone characteristics. 92.3% of Group A were rendered stone-free, whereas 77.2% were stone-free in Group B (P = 0.001). Furthermore, in both groups the SWL success rate was significantly higher for stones with lower attenuation (<830 HU) than for stones >830 HU (P < 0.034). SSD also showed statistically significant differences in SWL outcome (P < 0.02). Considering the three parameters stone size, stone attenuation value, and SSD simultaneously, we found that the stone-free rate (SFR) was 100% for stones with attenuation <830 HU, whether <10 mm or >10 mm, but the total number of SWL sessions and shock waves required was higher for the larger stone group (P < 0.01). Furthermore, SFR was 83.3% and 37.5% for stones <10 mm with mean HU >830 at SSD <90 mm and SSD >120 mm, respectively. On the other hand, SFR was 52.6% and 28.57% for stones >10 mm with mean HU >830 at SSD <90 mm and SSD >120 mm, respectively. Conclusion: Stone size, stone density (HU), and SSD are simple to calculate and can be reported by radiologists; applying a combined score helps augment the predictive power of SWL, reduce cost, and improve treatment strategies. PMID:27141192
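The abstract's three cut-offs lend themselves to a simple combined read-out. The thresholds (10 mm, 830 HU, 90 mm SSD) come from the abstract; collapsing them into a single recommendation, and the recommendation wording, are purely illustrative:

```python
def swl_prognosis(stone_mm, hu, ssd_mm):
    """Count how many of the abstract's three favorable criteria are met
    and map the count to a rough (hypothetical) recommendation."""
    favorable = sum([stone_mm <= 10, hu < 830, ssd_mm < 90])
    if favorable == 3:
        return "high stone-free probability"
    if favorable == 0:
        return "consider alternative to SWL"
    return "intermediate"

print(swl_prognosis(8, 600, 80))     # all three criteria favorable
print(swl_prognosis(14, 1100, 130))  # none favorable
```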

  12. Target Highlights in CASP9: Experimental Target Structures for the Critical Assessment of Techniques for Protein Structure Prediction

    PubMed Central

    Kryshtafovych, Andriy; Moult, John; Bartual, Sergio G.; Bazan, J. Fernando; Berman, Helen; Casteel, Darren E.; Christodoulou, Evangelos; Everett, John K.; Hausmann, Jens; Heidebrecht, Tatjana; Hills, Tanya; Hui, Raymond; Hunt, John F.; Jayaraman, Seetharaman; Joachimiak, Andrzej; Kennedy, Michael A.; Kim, Choel; Lingel, Andreas; Michalska, Karolina; Montelione, Gaetano T.; Otero, José M.; Perrakis, Anastassis; Pizarro, Juan C.; van Raaij, Mark J.; Ramelot, Theresa A.; Rousseau, Francois; Tong, Liang; Wernimont, Amy K.; Young, Jasmine; Schwede, Torsten

    2011-01-01

    One goal of the CASP Community Wide Experiment on the Critical Assessment of Techniques for Protein Structure Prediction is to identify the current state of the art in protein structure prediction and modeling. A fundamental principle of CASP is blind prediction on a set of relevant protein targets, i.e. the participating computational methods are tested on a common set of experimental target proteins, for which the experimental structures are not known at the time of modeling. Therefore, the CASP experiment would not have been possible without broad support of the experimental protein structural biology community. In this manuscript, several experimental groups discuss the structures of the proteins which they provided as prediction targets for CASP9, highlighting structural and functional peculiarities of these structures: the long tail fibre protein gp37 from bacteriophage T4, the cyclic GMP-dependent protein kinase Iβ (PKGIβ) dimerization/docking domain, the ectodomain of the JTB (Jumping Translocation Breakpoint) transmembrane receptor, Autotaxin (ATX) in complex with an inhibitor, the DNA-Binding J-Binding Protein 1 (JBP1) domain essential for biosynthesis and maintenance of DNA base-J (β-D-glucosyl-hydroxymethyluracil) in Trypanosoma and Leishmania, a so-far-uncharacterized 73-residue domain from Ruminococcus gnavus with a fold typical for PDZ-like domains, a domain from the Phycobilisome (PBS) core-membrane linker (LCM) phycobiliprotein ApcE from Synechocystis, the Heat shock protein 90 (Hsp90) activators PFC0360w and PFC0270w from Plasmodium falciparum, and 2-oxo-3-deoxygalactonate kinase from Klebsiella pneumoniae. PMID:22020785

  13. Investigation into the Prediction Level of Professional Values of Prospective Teachers within the Context of Critical Thinking, Metacognition and Epistemological Beliefs in Turkey

    ERIC Educational Resources Information Center

    Demir, Özden; Doganay, Ahmet; Kaya, Halil Ibrahim

    2016-01-01

    The general aim of the present study is to identify to what extent the professional values of prospective teachers are predicted by the variables of critical thinking, metacognition, epistemological beliefs. The study also aims to determine which variables provide a better prediction of the professional values of prospective teachers than the…

  14. Accurate prediction of hard-sphere virial coefficients B6 to B12 from a compressibility-based equation of state

    NASA Astrophysics Data System (ADS)

    Hansen-Goos, Hendrik

    2016-04-01

    We derive an analytical equation of state for the hard-sphere fluid that is within 0.01% of computer simulations for the whole range of the stable fluid phase. In contrast, the commonly used Carnahan-Starling equation of state deviates by up to 0.3% from simulations. The derivation uses the functional form of the isothermal compressibility from the Percus-Yevick closure of the Ornstein-Zernike relation as a starting point. Two additional degrees of freedom are introduced, which are constrained by requiring the equation of state to (i) recover the exact fourth virial coefficient B4 and (ii) involve only integer coefficients on the level of the ideal gas, while providing best possible agreement with the numerical result for B5. Virial coefficients B6 to B10 obtained from the equation of state are within 0.5% of numerical computations, and coefficients B11 and B12 are within the error of numerical results. We conjecture that even higher virial coefficients are reliably predicted.
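
    For comparison, the Carnahan-Starling baseline mentioned above yields the integer series coefficients 1, 4, 10, 18, 28, … (the pattern n² + n − 2) in powers of the packing fraction η. A short sketch recovers them by expanding Z = (1 + η + η² − η³)/(1 − η)³ via polynomial division (illustrative background only; this is not the new equation of state derived in the paper):

```python
def cs_virial_coeffs(n_terms):
    """Series coefficients of the Carnahan-Starling compressibility
    factor Z = (1 + eta + eta^2 - eta^3) / (1 - eta)^3 in powers of
    the packing fraction eta, obtained by polynomial long division.
    The k-th coefficient corresponds to the reduced virial
    coefficient B_{k+1} in units of the hard-sphere volume factor.
    """
    num = [1, 1, 1, -1]   # numerator: 1 + eta + eta^2 - eta^3
    den = [1, -3, 3, -1]  # denominator: (1 - eta)^3
    a = []
    for k in range(n_terms):
        rhs = num[k] if k < len(num) else 0
        # subtract the already-known part of the convolution a * den
        s = sum(den[j] * a[k - j] for j in range(1, min(k, 3) + 1))
        a.append(rhs - s)
    return a
```

Note that the CS fourth coefficient is exactly 18, whereas the exact B4 term is about 18.36 — the kind of small deviation (up to 0.3% in Z) that the compressibility-based equation of state above is designed to remove.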

  15. Accurate predictions of spectroscopic and molecular properties of 27 Λ-S and 73 Ω states of AsS radical.

    PubMed

    Shi, Deheng; Song, Ziyue; Niu, Xianghong; Sun, Jinfeng; Zhu, Zunlue

    2016-01-15

    The PECs are calculated for the 27 Λ-S states and their corresponding 73 Ω states of AsS radical. Of these Λ-S states, only the 2(2)Δ and 5(4)Π states are repulsive. The 1(2)Σ(+), 2(2)Σ(+), 4(2)Π, 3(4)Δ, 3(4)Σ(+), and 4(4)Π states possess double wells. The 3(2)Σ(+) state possesses three wells. The A(2)Π, 3(2)Π, 1(2)Φ, 2(4)Π, 3(4)Π, 2(4)Δ, 3(4)Δ, 1(6)Σ(+), and 1(6)Π states are inverted with the SO coupling effect included. The 1(4)Σ(+), 2(4)Σ(+), 2(4)Σ(-), 2(4)Δ, 1(4)Φ, 1(6)Σ(+), and 1(6)Π states, the second wells of the 1(2)Σ(+), 3(4)Σ(+), 4(2)Π, 4(4)Π, and 3(4)Δ states, and the third well of the 3(2)Σ(+) state are very weakly bound states. The PECs are extrapolated to the CBS limit. The effect of SO coupling on the PECs is discussed. The spectroscopic parameters are evaluated and compared with available measurements and other theoretical results. The vibrational properties of several weakly bound states are determined. The spectroscopic properties reported here are expected to be reliable predictions.

  16. Accurate predictions of spectroscopic and molecular properties of 27 Λ-S and 73 Ω states of AsS radical

    NASA Astrophysics Data System (ADS)

    Shi, Deheng; Song, Ziyue; Niu, Xianghong; Sun, Jinfeng; Zhu, Zunlue

    2016-01-01

    The PECs are calculated for the 27 Λ-S states and their corresponding 73 Ω states of AsS radical. Of these Λ-S states, only the 22Δ and 54Π states are repulsive. The 12Σ+, 22Σ+, 42Π, 34Δ, 34Σ+, and 44Π states possess double wells. The 32Σ+ state possesses three wells. The A2Π, 32Π, 12Φ, 24Π, 34Π, 24Δ, 34Δ, 16Σ+, and 16Π states are inverted with the SO coupling effect included. The 14Σ+, 24Σ+, 24Σ-, 24Δ, 14Φ, 16Σ+, and 16Π states, the second wells of the 12Σ+, 34Σ+, 42Π, 44Π, and 34Δ states, and the third well of the 32Σ+ state are very weakly bound states. The PECs are extrapolated to the CBS limit. The effect of SO coupling on the PECs is discussed. The spectroscopic parameters are evaluated and compared with available measurements and other theoretical results. The vibrational properties of several weakly bound states are determined. The spectroscopic properties reported here are expected to be reliable predictions.

  17. Vmax estimate from three-parameter critical velocity models: validity and impact on 800 m running performance prediction.

    PubMed

    Bosquet, Laurent; Duchene, Antoine; Lecot, François; Dupont, Grégory; Leger, Luc

    2006-05-01

    The purpose of this study was to evaluate the validity of maximal velocity (Vmax) estimated from three-parameter systems models, and to compare the predictive value of two- and three-parameter models for the 800 m. Seventeen trained male subjects (VO2max=66.54+/-7.29 ml min(-1) kg(-1)) performed five randomly ordered constant velocity tests (CVT), a maximal velocity test (mean velocity over the last 10 m portion of a 40 m sprint) and an 800 m time trial (V800m). Five systems models (two three-parameter and three two-parameter) were used to compute Vmax (three-parameter models), critical velocity (CV), anaerobic running capacity (ARC) and V800m from times to exhaustion during CVT. Vmax estimates were significantly lower than the measured maximal velocity. Critical velocity (CV) alone explained 40-62% of the variance in V800m. Combining CV with other parameters of each model to produce a calculated V800m resulted in a clear improvement of this relationship. Although three-parameter models have a better predictive value for short-duration events such as the 800 m, the fact that Vmax is not associated with the ability it is supposed to reflect suggests that they are more empirical than systems models.
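
    The two-parameter systems models compared here reduce to the linear distance-time relation d_lim = ARC + CV · t_lim, so CV and ARC can be estimated by ordinary least squares over the constant-velocity-test data. A minimal sketch with synthetic numbers (an illustrative helper, not the authors' modelling code):

```python
def fit_critical_velocity(times_s, distances_m):
    """Fit the classic two-parameter model d_lim = ARC + CV * t_lim by
    ordinary least squares on (time-to-exhaustion, distance) pairs.
    Returns (CV in m/s, ARC in m).
    """
    n = len(times_s)
    mt = sum(times_s) / n
    md = sum(distances_m) / n
    cov = sum((t - mt) * (d - md) for t, d in zip(times_s, distances_m))
    var = sum((t - mt) ** 2 for t in times_s)
    cv = cov / var            # slope = critical velocity
    arc = md - cv * mt        # intercept = anaerobic running capacity
    return cv, arc

# synthetic CVT data generated with CV = 5 m/s and ARC = 200 m
times = [60, 120, 240, 480]
dists = [200 + 5 * t for t in times]
cv, arc = fit_critical_velocity(times, dists)
```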

  18. An Approach for Validating Actinide and Fission Product Burnup Credit Criticality Safety Analyses-Isotopic Composition Predictions

    SciTech Connect

    Radulescu, Georgeta; Gauld, Ian C; Ilas, Germina; Wagner, John C

    2011-01-01

    The expanded use of burnup credit in the United States (U.S.) for storage and transport casks, particularly in the acceptance of credit for fission products, has been constrained by the availability of experimental fission product data to support code validation. The U.S. Nuclear Regulatory Commission (NRC) staff has noted that the rationale for restricting the Interim Staff Guidance on burnup credit for storage and transportation casks (ISG-8) to actinide-only is based largely on the lack of clear, definitive experiments that can be used to estimate the bias and uncertainty for computational analyses associated with using burnup credit. To address the issues of burnup credit criticality validation, the NRC initiated a project with the Oak Ridge National Laboratory to (1) develop and establish a technically sound validation approach for commercial spent nuclear fuel (SNF) criticality safety evaluations based on best-available data and methods and (2) apply the approach for representative SNF storage and transport configurations/conditions to demonstrate its usage and applicability, as well as to provide reference bias results. The purpose of this paper is to describe the isotopic composition (depletion) validation approach and resulting observations and recommendations. Validation of the criticality calculations is addressed in a companion paper at this conference. For isotopic composition validation, the approach is to determine burnup-dependent bias and uncertainty in the effective neutron multiplication factor (keff) due to bias and uncertainty in isotopic predictions, via comparisons of isotopic composition predictions (calculated) and measured isotopic compositions from destructive radiochemical assay utilizing as much assay data as is available, and a best-estimate Monte Carlo based method. This paper (1) provides a detailed description of the burnup credit isotopic validation approach and its technical bases, (2) describes the application of the approach for

  19. [Critical evaluation and predictive value of clinical presentation in out-patients with acute community-acquired pneumonia].

    PubMed

    Mayaud, C; Fartoukh, M; Prigent, H; Parrot, A; Cadranel, J

    2006-01-01

    Diagnostic probability of community-acquired pneumonia (CAP) depends on data related to age and clinical and radiological findings. A critical evaluation of the literature leads to the following conclusions: 1) the prevalence of CAP in a given population with acute respiratory disease is 5% in outpatients and 10% in an emergency care unit; it may be as low as 2% in young people and higher than 40% in hospitalized elderly patients; 2) the collection of clinical data depends on the way the patient is examined and on the expertise of the clinician. The complete absence of "vital signs" has a good negative predictive value for CAP; the presence of unilateral crackles has a good positive predictive value; 3) there is a wide range of X-ray abnormalities: localized alveolar opacities and interstitial opacities, localized or diffuse. The greatest radiological difficulties are encountered in elderly people with disorders producing chronic respiratory or cardiac opacities, and as a consequence of the high prevalence of bronchopneumonia episodes at this age; 4) among patients with lower respiratory tract (LRT) infections, blood levels of leukocytes, CRP and procalcitonin are higher in CAP patients, mainly when the disease has a bacterial origin. Since no threshold value has been reliably demonstrated in large populations with LRT infections or acute respiratory disease, the presence or absence of these markers can only be taken as a slight hint for or against a CAP diagnosis. PMID:17084571

  20. Predicting the toxicity of sediment-associated trace metals with simultaneously extracted trace metal: Acid-volatile sulfide concentrations and dry weight-normalized concentrations: A critical comparison

    USGS Publications Warehouse

    Long, E.R.; MacDonald, D.D.; Cubbage, J.C.; Ingersoll, C.G.

    1998-01-01

    The relative abilities of sediment concentrations of simultaneously extracted trace metal: acid-volatile sulfide (SEM:AVS) and dry weight-normalized trace metals to correctly predict both toxicity and nontoxicity were compared by analysis of 77 field-collected samples. Relative to the SEM:AVS concentrations, sediment guidelines based upon dry weight-normalized concentrations were equally or slightly more accurate in predicting both nontoxic and toxic results in laboratory tests.

  1. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics is discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed; these are then applied to a number of chemical and spectroscopic problems, to transition metals, and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  2. Ischemia-modified albumin levels in the prediction of acute critical neurological findings in carbon monoxide poisoning.

    PubMed

    Daş, Murat; Çevik, Yunsur; Erel, Özcan; Çorbacioğlu, Şeref Kerem

    2016-04-01

    The aim of the study was to determine whether serum ischemia-modified albumin (IMA) levels in patients with carbon monoxide (CO) poisoning were higher compared with a control group of healthy volunteers. In addition, the study sought to determine if there was a correlation between serum IMA levels and carboxyhemoglobin (COHB) levels and other critical neurological findings (CNFs). In this prospective study, the IMA levels of 100 patients with CO poisoning and 50 control individuals were compared. In addition, the IMA and COHB levels were analyzed according to the absence or presence of CNFs in patients with CO poisoning. The levels of IMA (mg/dL) on admittance, and during the 1(st) hour and 3(rd) hour, in patients with CO poisoning (49.90 ± 35.43, 30.21 ± 14.81, and 21.87 ± 6.03) were significantly higher compared with the control individuals (17.30 ± 2.88). The levels of IMA in the 6(th) hour were not higher compared with control individuals. The levels of IMA on admittance, and during the 1(st) hour, 3(rd) hour, and 6(th) hour, and COHB (%) levels in patients who had CNFs were higher compared with IMA levels and COHB levels in patients who had no CNFs (p < 0.001). However, when the multivariate model was created, it was observed that IMA level on admittance was a poor indicator for prediction of CNFs (odds ratio = 1.05; 95% confidence interval, 1.01-1.08). We therefore concluded that serum IMA levels could be helpful in the diagnosis of CO poisoning. However, we believe that IMA levels cannot be used to predict which patients will develop CNFs due to CO poisoning.

  3. Can Selforganizing Maps Accurately Predict Photometric Redshifts?

    NASA Technical Reports Server (NTRS)

    Way, Michael J.; Klose, Christian

    2012-01-01

    We present an unsupervised machine-learning approach that can be employed for estimating photometric redshifts. The proposed method is based on a vector quantization called the self-organizing-map (SOM) approach. A variety of photometrically derived input values were utilized from the Sloan Digital Sky Survey's main galaxy sample, luminous red galaxy, and quasar samples, along with the PHAT0 data set from the Photo-z Accuracy Testing project. Regression results obtained with this new approach were evaluated in terms of root-mean-square error (RMSE) to estimate the accuracy of the photometric redshift estimates. The results demonstrate competitive RMSE and outlier percentages when compared with several other popular approaches, such as artificial neural networks and Gaussian process regression. SOM RMSE results (using delta(z) = z(sub phot) - z(sub spec)) are 0.023 for the main galaxy sample, 0.027 for the luminous red galaxy sample, 0.418 for quasars, and 0.022 for PHAT0 synthetic data. The results demonstrate that there are nonunique solutions for estimating SOM RMSEs. Further research is needed in order to find more robust estimation techniques using SOMs, but the results herein are a positive indication of their capabilities when compared with other well-known methods.
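
    The SOM-as-regressor idea can be illustrated with a toy one-dimensional map: train the vector-quantization grid on the photometric inputs, then let each node predict the mean redshift of the training samples it wins. This is a minimal sketch in the spirit of the approach, not the authors' implementation (node count, learning-rate schedule, and neighborhood function are arbitrary choices here):

```python
import math
import random

def train_som_regressor(X, y, n_nodes=10, epochs=50, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal 1-D self-organizing map used as a regressor. Each node
    holds a weight vector in input space; after training, a node
    predicts the mean target of the samples for which it is the
    best-matching unit (BMU).
    """
    rng = random.Random(seed)
    dim = len(X[0])
    nodes = [[rng.random() for _ in range(dim)] for _ in range(n_nodes)]

    def bmu(x):
        return min(range(n_nodes),
                   key=lambda i: sum((nodes[i][k] - x[k]) ** 2 for k in range(dim)))

    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)                 # decaying learning rate
        sigma = max(sigma0 * (1 - epoch / epochs), 0.5)  # shrinking neighborhood
        for x in X:
            b = bmu(x)
            for i in range(n_nodes):
                h = math.exp(-((i - b) ** 2) / (2 * sigma ** 2))
                for k in range(dim):
                    nodes[i][k] += lr * h * (x[k] - nodes[i][k])

    # attach the mean target value to each node via its BMU mapping
    sums, counts = [0.0] * n_nodes, [0] * n_nodes
    for x, t in zip(X, y):
        b = bmu(x)
        sums[b] += t
        counts[b] += 1
    overall = sum(y) / len(y)
    means = [sums[i] / counts[i] if counts[i] else overall for i in range(n_nodes)]

    def predict(x):
        return means[bmu(x)]
    return predict
```

On a simple synthetic set the local node means beat a constant (global-mean) predictor, which is the basic mechanism behind the competitive RMSE figures quoted above.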

  4. Prediction of Limb Salvage after Therapeutic Angiogenesis by Autologous Bone Marrow Cell Implantation in Patients with Critical Limb Ischemia

    PubMed Central

    Tara, Shuhei; Miyamoto, Masaaki; Takagi, Gen; Fukushima, Yoshimitsu; Kirinoki-ichikawa, Sonoko; Takano, Hitoshi; Takagi, Ikuyo; Mizuno, Hiroshi; Yasutake, Masahiro; Kumita, Shinichiro; Mizuno, Kyoichi

    2011-01-01

    Purpose: Despite advances in therapeutic angiogenesis by bone marrow cell implantation (BMCI), limb amputation remains a major unfavorable outcome in patients with critical limb ischemia (CLI). We sought to identify predictor(s) of limb salvage in CLI patients who received BMCI. Materials and Methods: Nineteen patients with CLI who were treated by BMCI were divided into two groups: four patients with above-the-ankle amputation by 12 weeks after BMCI (amputation group) and the remaining 15 patients without (salvage group). We performed several blood-flow examinations before BMCI. Ankle-brachial index (ABI) was measured with the standard method. Transcutaneous oxygen tension (TcPO2) was measured at the dorsum of the foot, in the absence (baseline) and presence (maximum TcPO2) of oxygen inhalation. The 99mtechnetium-tetrofosmin (99mTc-TF) perfusion index was determined at the foot and lower leg as the ratio to brain uptake. Results: Maximum TcPO2 (p = 0.031) and the 99mTc-TF perfusion index in the foot (p = 0.0068) were significantly higher in the salvage group than in the amputation group. Receiver operating characteristic (ROC) curve analysis identified maximum TcPO2 and the 99mTc-TF perfusion index in the foot as having high predictive accuracy for limb salvage. Conclusion: Maximum TcPO2 and the 99mTc-TF perfusion index in the foot are promising predictors of limb salvage after BMCI in CLI. PMID:23555423

  5. Statistical analysis of accurate prediction of local atmospheric optical attenuation with a new model according to weather together with beam wandering compensation system: a season-wise experimental investigation

    NASA Astrophysics Data System (ADS)

    Arockia Bazil Raj, A.; Padmavathi, S.

    2016-07-01

    Atmospheric parameters strongly affect the performance of a Free Space Optical Communication (FSOC) system when the optical wave propagates through the inhomogeneous turbulent medium. Developing a model that accurately predicts optical attenuation from meteorological parameters is therefore significant for understanding the behaviour of the FSOC channel during different seasons. A dedicated free-space optical link experimental set-up was developed for a range of 0.5 km at an altitude of 15.25 m. The diurnal profile of received power and the corresponding meteorological parameters are continuously measured using the developed optoelectronic assembly and weather station, respectively, and stored on a data-logging computer. Measured meteorological parameters (as input factors) and optical attenuation (as response factor) of size [177147 × 4] are used for linear regression analysis and to design a mathematical model suitable for predicting the atmospheric optical attenuation at our test field. A model that exhibits an R2 value of 98.76% and an average percentage deviation of 1.59% is adopted for practical implementation. The prediction accuracy of the proposed model is investigated, along with comparative results obtained from several existing models, in terms of Root Mean Square Error (RMSE) during different local seasons over a one-year period. An average RMSE of 0.043 dB/km is obtained over the wide dynamic range of meteorological parameter variations.
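
    The multivariable linear fit described above is an ordinary least-squares regression of attenuation on weather inputs. A self-contained sketch via the normal equations (illustrative only; the paper's actual model form and coefficients are not reproduced here):

```python
def ols_fit(X, y):
    """Ordinary least squares with an intercept, solved via the normal
    equations (X^T X) beta = X^T y using Gaussian elimination with
    partial pivoting. Returns [intercept, coef_1, ..., coef_p].
    """
    rows = [[1.0] + list(r) for r in X]  # prepend intercept column
    p = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    b = [sum(r[i] * t for r, t in zip(rows, y)) for i in range(p)]
    for c in range(p):
        piv = max(range(c, p), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        b[c], b[piv] = b[piv], b[c]
        for r in range(c + 1, p):
            f = A[r][c] / A[c][c]
            for k in range(c, p):
                A[r][k] -= f * A[c][k]
            b[r] -= f * b[c]
    beta = [0.0] * p
    for i in reversed(range(p)):
        beta[i] = (b[i] - sum(A[i][k] * beta[k] for k in range(i + 1, p))) / A[i][i]
    return beta
```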

  6. Prediction of {sup 2}D Rydberg energy levels of {sup 6}Li and {sup 7}Li based on very accurate quantum mechanical calculations performed with explicitly correlated Gaussian functions

    SciTech Connect

    Bubin, Sergiy; Sharkey, Keeper L.; Adamowicz, Ludwik

    2013-04-28

    Very accurate variational nonrelativistic finite-nuclear-mass calculations employing all-electron explicitly correlated Gaussian basis functions are carried out for six Rydberg {sup 2}D states (1s{sup 2}nd, n = 6, …, 11) of the {sup 7}Li and {sup 6}Li isotopes. The exponential parameters of the Gaussian functions are optimized using the variational method with the aid of the analytical energy gradient determined with respect to these parameters. The experimental results for the lower states (n = 3, …, 6) and the calculated results for the higher states (n = 7, …, 11) fitted with quantum-defect-like formulas are used to predict the energies of {sup 2}D 1s{sup 2}nd states for {sup 7}Li and {sup 6}Li with n up to 30.
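
    The quantum-defect-like extrapolation amounts to fitting a constant defect δ in E_n = E_∞ − R/(n − δ)² from a few known levels and then evaluating the formula at higher n. A sketch with synthetic numbers (the E_∞, R, and δ values below are illustrative placeholders, not the paper's fitted constants):

```python
import math

def quantum_defect_extrapolate(levels, e_inf, n_pred, rydberg=109728.8):
    """Fit a constant quantum defect delta in the Rydberg-like formula
    E_n = E_inf - R / (n - delta)^2 from known term energies `levels`
    (a dict n -> E_n, same units as e_inf), then extrapolate to n_pred.
    """
    deltas = [n - math.sqrt(rydberg / (e_inf - e)) for n, e in levels.items()]
    delta = sum(deltas) / len(deltas)  # average defect over the fitted levels
    return e_inf - rydberg / (n_pred - delta) ** 2
```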

  7. UTILIZATION OF A PBPK MODEL TO PREDICT THE DISTRIBUTION OF 2, 3, 7-8 TETRACHLORODIBENZO-P-DIOXIN (TCDD) IN HUMANS DURING CRITICAL WINDOWS OF DEVELOPMENT

    EPA Science Inventory

    Utilization of A PBPK model to predict the distribution of 2,3,7,8-Tetrachlorodibenzo-p-dioxin (TCDD) in humans during critical windows of development.
    C Emond1, MJ DeVito2 and LS Birnbaum2
    1National Research Council, US EPA, ORD, NHEERL, (ETD, PK), RTP, NC, 27711, USA 2 US...

  8. Gradient liquid chromatographic retention time prediction for suspect screening applications: A critical assessment of a generalised artificial neural network-based approach across 10 multi-residue reversed-phase analytical methods.

    PubMed

    Barron, Leon P; McEneff, Gillian L

    2016-01-15

    For the first time, the performance of a generalised artificial neural network (ANN) approach for the prediction of 2492 chromatographic retention times (tR) is presented for a total of 1117 chemically diverse compounds present in a range of complex matrices and across 10 gradient reversed-phase liquid chromatography-(high resolution) mass spectrometry methods. Probabilistic, generalised regression, radial basis function as well as 2- and 3-layer multilayer perceptron-type neural networks were investigated to determine the most robust and accurate model for this purpose. Multi-layer perceptrons most frequently yielded the best correlations in 8 out of 10 methods. Averaged correlations of predicted versus measured tR across all methods were R(2)=0.918, 0.924 and 0.898 for the training, verification and test sets respectively. Predictions of blind test compounds (n=8-84 cases) resulted in an average absolute accuracy of 1.02±0.54min for all methods. Within this variation, absolute accuracy was observed to marginally improve for shorter runtimes, but was found to be relatively consistent with respect to analyte retention ranges (~5%). Finally, optimised and replicated network dependency on molecular descriptor data is presented and critically discussed across all methods. Overall, ANNs were considered especially suitable for suspect screening applications and could potentially be utilised in bracketed-type analyses in combination with high resolution mass spectrometry. PMID:26592605
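
    The R² agreement statistics quoted above follow the standard coefficient-of-determination formula; a minimal helper for evaluating predicted-versus-measured tR (generic formula, not tied to the authors' software):

```python
def r_squared(predicted, measured):
    """Coefficient of determination R^2 = 1 - SS_res / SS_tot for a set
    of predicted vs. measured values (e.g. retention times in min).
    """
    mean = sum(measured) / len(measured)
    ss_res = sum((m - p) ** 2 for p, m in zip(predicted, measured))
    ss_tot = sum((m - mean) ** 2 for m in measured)
    return 1 - ss_res / ss_tot
```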

  9. Gradient liquid chromatographic retention time prediction for suspect screening applications: A critical assessment of a generalised artificial neural network-based approach across 10 multi-residue reversed-phase analytical methods.

    PubMed

    Barron, Leon P; McEneff, Gillian L

    2016-01-15

    For the first time, the performance of a generalised artificial neural network (ANN) approach for the prediction of 2492 chromatographic retention times (tR) is presented for a total of 1117 chemically diverse compounds present in a range of complex matrices and across 10 gradient reversed-phase liquid chromatography-(high resolution) mass spectrometry methods. Probabilistic, generalised regression, radial basis function as well as 2- and 3-layer multilayer perceptron-type neural networks were investigated to determine the most robust and accurate model for this purpose. Multi-layer perceptrons most frequently yielded the best correlations in 8 out of 10 methods. Averaged correlations of predicted versus measured tR across all methods were R(2)=0.918, 0.924 and 0.898 for the training, verification and test sets respectively. Predictions of blind test compounds (n=8-84 cases) resulted in an average absolute accuracy of 1.02±0.54min for all methods. Within this variation, absolute accuracy was observed to marginally improve for shorter runtimes, but was found to be relatively consistent with respect to analyte retention ranges (~5%). Finally, optimised and replicated network dependency on molecular descriptor data is presented and critically discussed across all methods. Overall, ANNs were considered especially suitable for suspect screening applications and could potentially be utilised in bracketed-type analyses in combination with high resolution mass spectrometry.

  10. External Validation and Recalibration of Risk Prediction Models for Acute Traumatic Brain Injury among Critically Ill Adult Patients in the United Kingdom

    PubMed Central

    Griggs, Kathryn A.; Prabhu, Gita; Gomes, Manuel; Lecky, Fiona E.; Hutchinson, Peter J. A.; Menon, David K.; Rowan, Kathryn M.

    2015-01-01

    Abstract This study validates risk prediction models for acute traumatic brain injury (TBI) in critical care units in the United Kingdom and recalibrates the models to this population. The Risk Adjustment In Neurocritical care (RAIN) Study was a prospective, observational cohort study in 67 adult critical care units. Adult patients admitted to critical care following acute TBI with a last pre-sedation Glasgow Coma Scale score of less than 15 were recruited. The primary outcomes were mortality and unfavorable outcome (death or severe disability, assessed using the Extended Glasgow Outcome Scale) at six months following TBI. Of 3626 critical care unit admissions, 2975 were analyzed. Following imputation of missing outcomes, mortality at six months was 25.7% and unfavorable outcome 57.4%. Ten risk prediction models were validated from Hukkelhoven and colleagues, the Medical Research Council (MRC) Corticosteroid Randomisation After Significant Head Injury (CRASH) Trial Collaborators, and the International Mission for Prognosis and Analysis of Clinical Trials in TBI (IMPACT) group. The model with the best discrimination was the IMPACT “Lab” model (C index, 0.779 for mortality and 0.713 for unfavorable outcome). This model was well calibrated for mortality at six months but substantially under-predicted the risk of unfavorable outcome. Recalibration of the models resulted in small improvements in discrimination and excellent calibration for all models. The risk prediction models demonstrated sufficient statistical performance to support their use in research and audit but fell below the level required to guide individual patient decision-making. The published models for unfavorable outcome at six months had poor calibration in the UK critical care setting and the models recalibrated to this setting should be used in future research. PMID:25898072

  11. External Validation and Recalibration of Risk Prediction Models for Acute Traumatic Brain Injury among Critically Ill Adult Patients in the United Kingdom.

    PubMed

    Harrison, David A; Griggs, Kathryn A; Prabhu, Gita; Gomes, Manuel; Lecky, Fiona E; Hutchinson, Peter J A; Menon, David K; Rowan, Kathryn M

    2015-10-01

    This study validates risk prediction models for acute traumatic brain injury (TBI) in critical care units in the United Kingdom and recalibrates the models to this population. The Risk Adjustment In Neurocritical care (RAIN) Study was a prospective, observational cohort study in 67 adult critical care units. Adult patients admitted to critical care following acute TBI with a last pre-sedation Glasgow Coma Scale score of less than 15 were recruited. The primary outcomes were mortality and unfavorable outcome (death or severe disability, assessed using the Extended Glasgow Outcome Scale) at six months following TBI. Of 3626 critical care unit admissions, 2975 were analyzed. Following imputation of missing outcomes, mortality at six months was 25.7% and unfavorable outcome 57.4%. Ten risk prediction models were validated from Hukkelhoven and colleagues, the Medical Research Council (MRC) Corticosteroid Randomisation After Significant Head Injury (CRASH) Trial Collaborators, and the International Mission for Prognosis and Analysis of Clinical Trials in TBI (IMPACT) group. The model with the best discrimination was the IMPACT "Lab" model (C index, 0.779 for mortality and 0.713 for unfavorable outcome). This model was well calibrated for mortality at six months but substantially under-predicted the risk of unfavorable outcome. Recalibration of the models resulted in small improvements in discrimination and excellent calibration for all models. The risk prediction models demonstrated sufficient statistical performance to support their use in research and audit but fell below the level required to guide individual patient decision-making. The published models for unfavorable outcome at six months had poor calibration in the UK critical care setting and the models recalibrated to this setting should be used in future research.
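
    The C index quoted for the IMPACT "Lab" model is, for a binary outcome such as six-month mortality, the fraction of (event, non-event) patient pairs in which the event patient received the higher predicted risk, with ties counted as one half. A minimal O(n²) pairwise sketch (illustrative; clinical implementations usually handle censoring as well):

```python
def concordance_index(risks, outcomes):
    """Harrell's C index for binary outcomes: the proportion of
    (event, non-event) pairs in which the event case was assigned the
    higher predicted risk; tied risks score 0.5.
    """
    pairs = 0.0
    concordant = 0.0
    for r1, o1 in zip(risks, outcomes):
        for r2, o2 in zip(risks, outcomes):
            if o1 == 1 and o2 == 0:
                pairs += 1
                if r1 > r2:
                    concordant += 1
                elif r1 == r2:
                    concordant += 0.5
    return concordant / pairs
```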

  12. Expanding the range for predicting critical flow rates of gas wells producing from normally pressured waterdrive reservoirs

    SciTech Connect

    Upchurch, E.R. )

    1989-08-01

    The critical flow rate of a gas well is the minimum flow rate required to prevent accumulation of liquids in the tubing. Theoretical models currently available for estimating critical flow rates are restricted to wells with water/gas ratios less than 150 bbl/MMcf (0.84 × 10⁻³ m³/m³). For wells producing at higher water/gas ratios from normally pressured waterdrive reservoirs, a method of estimating critical flow rates is derived through use of an empirical multiphase-flow correlation.
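
    For background, the most widely cited droplet model for the critical (liquid-unloading) velocity is Turner's correlation with its +20% field adjustment (coefficient 1.92, with surface tension in dyne/cm and densities in lbm/ft³, giving ft/s). It is shown here as general context, not as the extended high water/gas-ratio method the paper derives; verify the unit system before use:

```python
def turner_critical_velocity(sigma_dyne_cm, rho_liq_lbm_ft3, rho_gas_lbm_ft3):
    """Turner droplet-model critical (unloading) velocity in ft/s,
    including Turner's +20% upward adjustment (coefficient 1.92).
    Inputs: surface tension in dyne/cm, liquid and gas densities in
    lbm/ft^3. Illustrative background correlation only.
    """
    return (1.92
            * (sigma_dyne_cm * (rho_liq_lbm_ft3 - rho_gas_lbm_ft3)) ** 0.25
            / rho_gas_lbm_ft3 ** 0.5)
```

For typical water properties (σ ≈ 60 dyne/cm, ρ_liq ≈ 67 lbm/ft³) and a low-pressure gas density of 5 lbm/ft³, this gives a critical velocity of a few ft/s, the order of magnitude usually quoted for gas-well unloading.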

  13. Prognosis Can Be Predicted More Accurately Using Pre- and Postchemoradiotherapy Carcinoembryonic Antigen Levels Compared to Only Prechemoradiotherapy Carcinoembryonic Antigen Level in Locally Advanced Rectal Cancer Patients Who Received Neoadjuvant Chemoradiotherapy

    PubMed Central

    Sung, SooYoon; Son, Seok Hyun; Kay, Chul Seung; Lee, Yoon Suk

    2016-01-01

    Abstract We aimed to evaluate the prognostic value of a change in the carcinoembryonic antigen (CEA) level during neoadjuvant chemoradiotherapy (nCRT) in patients with locally advanced rectal cancer. A total of 110 patients with clinical T3/T4 or node-positive disease underwent nCRT and curative total mesorectal resection from February 2006 to December 2013. Serum CEA level was measured before nCRT, after nCRT, and then again after surgery. A cut-off value for CEA level to predict prognosis was determined using the maximally selected log-rank test. According to the test, patients were classified into 3 groups, based on their CEA levels (Group A: pre-CRT CEA ≤3.2; Group B: pre-CRT CEA level >3.2 and post-CRT CEA ≤2.8; and Group C: pre-CRT CEA >3.2 and post-CRT CEA >2.8). The median follow-up time was 31.1 months. The 3-year disease-free survival (DFS) rates of Group A and Group B were similar, while Group C showed a significantly lower 3-year DFS rate (82.5% vs. 89.5% vs. 55.1%, respectively, P = 0.001). Other clinicopathological factors that showed statistical significance on univariate analysis were pre-CRT CEA, post-CRT CEA, tumor distance from the anal verge, surgery type, downstage, pathologic N stage, margin status and perineural invasion. The CEA group (P = 0.001) and tumor distance from the anal verge (P = 0.044) were significant prognostic factors for DFS on multivariate analysis. Post-CRT CEA level may be a useful prognostic factor in patients whose prognosis cannot be predicted exactly by pre-CRT CEA levels alone in the neoadjuvant treatment era. Combined pre-CRT CEA and post-CRT CEA levels enable us to predict prognosis more accurately and determine treatment and follow-up policies. Further large-scale studies are necessary to validate the prognostic value of CEA levels. PMID:26962798
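The three-group stratification described above reduces to a simple rule on the two CEA measurements. A sketch using the reported cut-offs (units of ng/mL assumed):

```python
def cea_group(pre_crt_cea, post_crt_cea):
    """Assign the prognostic groups described in the abstract,
    using the reported cut-offs (pre-CRT 3.2, post-CRT 2.8)."""
    if pre_crt_cea <= 3.2:
        return "A"          # favorable regardless of post-CRT level
    if post_crt_cea <= 2.8:
        return "B"          # elevated pre-CRT but normalized after nCRT
    return "C"              # persistently elevated: worst 3-year DFS

print(cea_group(2.0, 5.0))  # A
print(cea_group(4.0, 2.0))  # B
print(cea_group(4.0, 3.0))  # C
```

Groups A and B had similar 3-year DFS (82.5% and 89.5%), so the post-CRT value mainly adds prognostic information for patients with elevated pre-CRT CEA.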

  14. Summary Report of Laboratory Critical Experiment Analyses Performed for the Disposal Criticality Analysis Methodology

    SciTech Connect

    J. Scaglione

    1999-09-09

    This report, "Summary Report of Laboratory Critical Experiment Analyses Performed for the Disposal Criticality Analysis Methodology", contains a summary of the laboratory critical experiment (LCE) analyses used to support the validation of the disposal criticality analysis methodology. The objective of this report is to present a summary of the LCE analyses' results. These results demonstrate the ability of MCNP to accurately predict the critical multiplication factor (keff) for fuel with different configurations. Results from the LCE evaluations will support the development and validation of the criticality models used in the disposal criticality analysis methodology. These models and their validation have been discussed in the "Disposal Criticality Analysis Methodology Topical Report" (CRWMS M&O 1998a).
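Validation against laboratory critical experiments typically compares calculated keff values against the expected value of 1.0 for a truly critical configuration. A minimal sketch of such a bias summary, with hypothetical keff values rather than results from this report:

```python
import statistics

def keff_bias(keff_values):
    """Summarize code-vs-experiment agreement for critical benchmarks:
    the mean deviation from keff = 1.0 estimates code bias, and the
    standard deviation indicates the spread across configurations."""
    mean_k = statistics.mean(keff_values)
    bias = mean_k - 1.0
    spread = statistics.stdev(keff_values)
    return mean_k, bias, spread

# Hypothetical calculated keff results for several LCE benchmarks
mean_k, bias, spread = keff_bias([0.9981, 1.0004, 0.9992, 1.0011, 0.9987])
print(f"mean keff = {mean_k:.4f}, bias = {bias:+.4f}")
```

A validated methodology would also fold the spread and benchmark uncertainties into an upper subcritical limit, which this sketch omits.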

  15. pRIFLE (Pediatric Risk, Injury, Failure, Loss, End Stage Renal Disease) score identifies Acute Kidney Injury and predicts mortality in critically ill children: a prospective study

    PubMed Central

    Soler, Yadira A.; Nieves-Plaza, Mariely; Prieto, Mónica; García-De Jesús, Ricardo; Suárez-Rivera, Marta

    2014-01-01

    Objectives 1) To determine whether pRIFLE (Pediatric Risk, Injury, Failure, Loss, End Stage Renal Disease) criteria serve to characterize the pattern of Acute Kidney Injury (AKI) in critically ill pediatric patients; and 2) to identify whether the pRIFLE score predicts morbidity and mortality in our patient cohort. Design Prospective Cohort. Setting Multidisciplinary, tertiary care, 10-bed PICU. Patients 266 patients admitted to the PICU from November 2009 to November 2010. Interventions None. Measurements and Main Results The incidence of AKI in the PICU was 27.4%, of which 83.5% presented within 72 hours of admission to the PICU. Patients with AKI were younger, weighed less, were more likely to be in fluid overload ≥10%, and were more likely to be on inotropic support, diuretics, or aminoglycosides. No difference in gender, use of other nephrotoxins, or mechanical ventilation was observed. Fluid overload ≥10% was an independent predictor of morbidity and mortality. In multivariate analysis, the AKI Injury and Failure categories, as defined by pRIFLE, predicted mortality, hospital length of stay, and PICU length of stay. Conclusions In this cohort of critically ill pediatric patients, AKI identified by pRIFLE and fluid overload ≥10% predicted increased morbidity and mortality. Implementation of pRIFLE scoring and close monitoring of fluid overload upon admission may help develop early interventions to prevent and treat AKI in critically ill children. PMID:23439463
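The published pRIFLE strata are defined by percentage decreases in estimated creatinine clearance (eCCl) and by urine output. The sketch below implements only the eCCl arm, with the thresholds given by Akcan-Arikan et al. (25%, 50%, and 75% decreases, or eCCl < 35 mL/min/1.73 m²); the urine-output arm is omitted:

```python
def prifle_category(baseline_eccl, current_eccl):
    """eCCl arm of the pRIFLE criteria: Risk/Injury/Failure at
    25/50/75% decreases in estimated creatinine clearance, or
    Failure outright when eCCl < 35 mL/min/1.73 m2."""
    decrease = 1.0 - current_eccl / baseline_eccl
    if decrease >= 0.75 or current_eccl < 35:
        return "Failure"
    if decrease >= 0.50:
        return "Injury"
    if decrease >= 0.25:
        return "Risk"
    return "No AKI"

print(prifle_category(120, 50))  # Injury
```

In the cohort above, it was the Injury and Failure categories produced by this kind of rule that independently predicted mortality and length of stay.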

  16. Final Technical Report - Stochastic Nonlinear Data-Reduction Methods with Detection and Prediction of Critical Rare Event

    SciTech Connect

    Karniadakis, George Em; Vanden-Eijnden, Eric; Lin, Guang; Wan, Xiaoliang

    2013-04-03

    In this project, the collective efforts of all co-PIs aim to address three current limitations in modeling stochastic systems: (1) the inputs are mostly based on ad hoc models, (2) the number of independent parameters is very high, and (3) rare and critical events are difficult to capture with existing algorithms.

  17. Fast, Accurate RF Propagation Modeling and Simulation Tool for Highly Cluttered Environments

    SciTech Connect

    Kuruganti, Phani Teja

    2007-01-01

    As network centric warfare and distributed operations paradigms unfold, there is a need for robust, fast wireless network deployment tools. These tools must take into consideration the terrain of the operating theater, and facilitate specific modeling of end-to-end network performance based on accurate RF propagation predictions. It is well known that empirical models cannot provide accurate, site-specific predictions of radio channel behavior. In this paper an event-driven wave propagation simulation is proposed as a computationally efficient technique for predicting critical propagation characteristics of RF signals in cluttered environments. Convincing validation and simulator performance studies confirm the suitability of this method for indoor and urban area RF channel modeling. By integrating our RF propagation prediction tool, RCSIM, with popular packet-level network simulators, we are able to construct an end-to-end network analysis tool for wireless networks operated in built-up urban areas.
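As a point of contrast with the site-specific simulator described above, the empirical baselines the paper argues against are typically simple distance laws, such as the log-distance model sketched here (the reference loss and path-loss exponent are illustrative values, not measurements):

```python
import math

def log_distance_path_loss(d_m, d0_m=1.0, pl0_db=40.0, n=3.0):
    """Log-distance path loss: PL(d) = PL(d0) + 10*n*log10(d/d0).
    The kind of empirical model that ignores site-specific clutter;
    pl0_db and the exponent n here are illustrative, not measured."""
    return pl0_db + 10.0 * n * math.log10(d_m / d0_m)

print(log_distance_path_loss(100.0))  # 40 + 30*2 = 100.0 dB
```

Because such a model depends only on distance, two receivers equidistant from a transmitter get identical predictions regardless of intervening buildings, which is exactly the limitation ray-based or wave-based simulation addresses.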

  18. Accurate monotone cubic interpolation

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1991-01-01

    Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema, where accuracy degenerates to second order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants that preserve monotonicity as well as uniform third- and fourth-order accuracy are presented. The gain in accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
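A minimal illustration of the role the median function plays as a limiter: the classic monotone construction below limits each interior node slope with median(0, left secant, right secant), which reduces to the minmod choice. It preserves monotonicity but not the uniform third/fourth-order accuracy that is the paper's contribution:

```python
def median3(a, b, c):
    """Median of three numbers: the limiter at the heart of many
    monotonicity-preserving interpolation schemes."""
    return max(min(a, b), min(max(a, b), c))

def monotone_slopes(x, y):
    """Node slopes for a monotone piecewise-cubic Hermite interpolant.
    Interior slopes use median(0, left secant, right secant), so a
    slope is zeroed whenever the data change direction, preventing
    overshoot near extrema (a simplified classic scheme)."""
    d = [(y[i+1] - y[i]) / (x[i+1] - x[i]) for i in range(len(x) - 1)]
    m = ([d[0]]
         + [median3(0.0, d[i-1], d[i]) for i in range(1, len(d))]
         + [d[-1]])
    return m

# Monotone data produce only non-negative slopes: no overshoot
print(monotone_slopes([0, 1, 2, 3], [0.0, 0.1, 0.9, 1.0]))
```

The accuracy loss near extrema comes precisely from the zeroed slopes; the paper's schemes relax this limiting while still ruling out spurious oscillations.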

  19. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10⁶) periods of propagation with eight grid points per wavelength.
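The notion of formal order of accuracy can be checked numerically: for a fourth-order stencil, halving the grid spacing should cut the error by about 16x. The sketch below uses the standard fourth-order central difference for a first derivative, a small illustration rather than one of the paper's aeroacoustics schemes:

```python
import math

def d1_central4(f, x, h):
    """Fourth-order-accurate central difference for f'(x):
    (-f(x+2h) + 8f(x+h) - 8f(x-h) + f(x-2h)) / (12h)."""
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12*h)

# Error shrinks like h**4: halving h cuts the error by roughly 16x
err1 = abs(d1_central4(math.sin, 1.0, 0.02) - math.cos(1.0))
err2 = abs(d1_central4(math.sin, 1.0, 0.01) - math.cos(1.0))
print(err1 / err2)  # close to 16
```

High-resolution schemes go beyond formal order by also minimizing dispersion error per wavelength, which is what permits accurate propagation over O(10⁶) periods.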

  20. Accurate and Accidental Empathy.

    ERIC Educational Resources Information Center

    Chandler, Michael

    The author offers two controversial criticisms of what are rapidly becoming standard assessment procedures for the measurement of empathic skill. First, he asserts that assessment procedures which attend exclusively to the accuracy with which subjects are able to characterize other people's feelings provide little or no useful information about…

  1. Using discharge data to reduce structural deficits in a hydrological model with a Bayesian inference approach and the implications for the prediction of critical source areas

    NASA Astrophysics Data System (ADS)

    Frey, M. P.; Stamm, C.; Schneider, M. K.; Reichert, P.

    2011-12-01

    A distributed hydrological model was used to simulate the distribution of fast runoff formation as a proxy for critical source areas for herbicide pollution in a small agricultural catchment in Switzerland. We tested to what degree predictions based on prior knowledge without local measurements could be improved by conditioning on observed discharge. This learning process consisted of five steps: For the prior prediction (step 1), knowledge of the model parameters was coarse and predictions were fairly uncertain. In the second step, discharge data were used to update the prior parameter distribution. Effects of uncertainty in input data and model structure were accounted for by an autoregressive error model. This step decreased the width of the marginal distributions of parameters describing the lower boundary (percolation rates) but hardly affected soil hydraulic parameters. Residual analysis (step 3) revealed model structure deficits. We modified the model, and in the subsequent Bayesian updating (step 4) the widths of the posterior marginal distributions were reduced for most parameters compared to those of the prior. This incremental procedure led to a strong reduction in the uncertainty of the spatial prediction. Thus, despite only using spatially integrated data (discharge), the improved model structure can be expected to improve the spatially distributed predictions as well. The fifth step consisted of a test with independent spatial data on herbicide losses and revealed ambiguous results. The comparison depended critically on the ratio of event to pre-event water that was discharged. This ratio cannot be estimated from hydrological data only. The results demonstrate that the value of local data is strongly dependent on a correct model structure. An iterative procedure of Bayesian updating, model testing, and model modification is suggested.
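The updating steps above can be illustrated with the simplest conjugate analogue: a normal prior on one parameter sharpened by normally distributed observations. This is a toy stand-in; the study uses a full hydrological model with an autoregressive error model, not this closed form:

```python
def normal_update(prior_mean, prior_var, obs_mean, obs_var, n):
    """Conjugate normal-normal Bayesian update. Precisions add, so
    the posterior variance always shrinks relative to the prior,
    mirroring the narrowing of marginals in steps 2 and 4 above."""
    post_prec = 1.0 / prior_var + n / obs_var
    post_var = 1.0 / post_prec
    post_mean = post_var * (prior_mean / prior_var + n * obs_mean / obs_var)
    return post_mean, post_var

# A vague prior on a percolation-rate-like parameter, sharpened by data
mean, var = normal_update(prior_mean=5.0, prior_var=4.0,
                          obs_mean=3.0, obs_var=1.0, n=20)
print(mean, var)  # posterior concentrates near the data mean
```

The caveat in the abstract also shows up here: if the likelihood (the model structure) is wrong, the posterior contracts confidently around a biased value, which is why residual analysis and model modification precede the final update.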

  2. Feedback about More Accurate versus Less Accurate Trials: Differential Effects on Self-Confidence and Activation

    ERIC Educational Resources Information Center

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-01-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…

  3. The timing of the human circadian clock is accurately represented by the core body temperature rhythm following phase shifts to a three-cycle light stimulus near the critical zone

    NASA Technical Reports Server (NTRS)

    Jewett, M. E.; Duffy, J. F.; Czeisler, C. A.

    2000-01-01

    A double-stimulus experiment was conducted to evaluate the phase of the underlying circadian clock following light-induced phase shifts of the human circadian system. Circadian phase was assayed by constant routine from the rhythm in core body temperature before and after a three-cycle bright-light stimulus applied near the estimated minimum of the core body temperature rhythm. An identical, consecutive three-cycle light stimulus was then applied, and phase was reassessed. Phase shifts to these consecutive stimuli were no different from those obtained in a previous study following light stimuli applied under steady-state conditions over a range of circadian phases similar to those at which the consecutive stimuli were applied. These data suggest that circadian phase shifts of the core body temperature rhythm in response to a three-cycle stimulus occur within 24 h following the end of the 3-day light stimulus and that this poststimulus temperature rhythm accurately reflects the timing of the underlying circadian clock.

  4. Will the future lie in multitude? A critical appraisal of biomarker panel studies on prediction of diabetic kidney disease progression.

    PubMed

    Schutte, Elise; Gansevoort, Ron T; Benner, Jacqueline; Lutgers, Helen L; Lambers Heerspink, Hiddo J

    2015-08-01

    Diabetic kidney disease is diagnosed and staged by albuminuria and estimated glomerular filtration rate. Although albuminuria has strong predictive power for renal function decline, there is still variability in the rate of renal disease progression across individuals that is not fully captured by the level of albuminuria. Therefore, research focuses on discovering and validating additional biomarkers that improve risk stratification for future renal function decline and end-stage renal disease in patients with diabetes, on top of established biomarkers. Most studies address the value of single biomarkers to predict progressive renal disease and aim to understand the mechanisms that underlie accelerated renal function decline. Since diabetic kidney disease is a disease encompassing several pathophysiological processes, a combination of biomarkers may be more likely to improve risk prediction than a single biomarker. In this review, we provide an overview of studies on the use of multiple biomarkers and biomarker panels, appraise their study design, discuss methodological pitfalls and make recommendations for future biomarker panel studies.

  5. Computational DNA hole spectroscopy: A new tool to predict mutation hotspots, critical base pairs, and disease ‘driver’ mutations

    PubMed Central

    Suárez Villagrán, Martha Y.; Miller, John H.

    2015-01-01

    We report on a new technique, computational DNA hole spectroscopy, which creates spectra of electron hole probabilities vs. nucleotide position. A hole is a site of positive charge created when an electron is removed. Peaks in the hole spectrum depict sites where holes tend to localize and potentially trigger a base pair mismatch during replication. Our studies of mitochondrial DNA reveal a correlation between L-strand hole spectrum peaks and spikes in the human mutation spectrum. Importantly, we also find that hole peak positions that do not coincide with large variant frequencies often coincide with disease-implicated mutations and/or (for coding DNA) encoded conserved amino acids. This enables combining hole spectra with variant data to identify critical base pairs and potential disease ‘driver’ mutations. Such integration of DNA hole and variance spectra could ultimately prove invaluable for pinpointing critical regions of the vast non-protein-coding genome. An observed asymmetry in correlations, between the spectrum of human mtDNA variations and the L- and H-strand hole spectra, is attributed to asymmetric DNA replication processes that occur for the leading and lagging strands. PMID:26310834

  6. The theory of planned behaviour as a framework for predicting sexual risk behaviour in sub-Saharan African youth: A critical review.

    PubMed

    Protogerou, Cleo; Flisher, Alan J; Aarø, Leif Edvard; Mathews, Catherine

    2012-07-01

    Amongst the psychological theories that have been used to help understand why people have unprotected sex, the Theory of Planned Behaviour (TPB; Ajzen, 1991) has earned a prominent position. This article is a critical review of 11 peer-reviewed studies conducted in sub-Saharan Africa during 2001 to 2009, which used the TPB as a model for predicting sexual risk behaviour in young people. All the studies revealed the predictive ability of the TPB in urban, rural, and traditional African settings, with R² coefficients ranging between 0.14 and 0.67. With data comparing favourably to those obtained in the international literature, these studies indicate that the TPB can be used to study sexual risk intentions and behaviour in sub-Saharan African youth, and question arguments against the theory's use in non-Western settings. PMID:25865835

  7. A comparison of equilibrium partitioning and critical body residue approaches for predicting toxicity of sediment-associated fluoranthene to freshwater amphipods

    SciTech Connect

    Driscoll, S.K.; Landrum, P.F.

    1997-10-01

    Equilibrium partitioning (EqP) theory predicts that the effects of organic compounds in sediments can be assessed by comparison of organic carbon-normalized sediment concentrations and estimated pore-water concentrations to effects determined in water-only exposures. A complementary approach, the critical body residue (CBR) theory, examines actual body burdens in relation to toxic effects. Critical body residue theory predicts that the narcotic effects of nonpolar compounds should be essentially constant for similar organisms, and narcosis should be observed at body burdens of 2 to 8 µmol/g tissue. This study compares these two approaches for predicting toxicity of the polycyclic aromatic hydrocarbon (PAH) fluoranthene. The freshwater amphipods Hyalella azteca and Diporeia spp. were exposed for up to 30 d to sediment spiked with radiolabeled fluoranthene at concentrations of 0.1 (trace) to 3940 nmol/g dry weight (346 µmol/g organic carbon). Mean survival of Diporeia was generally high (>70%) and not significantly different from that of control animals. This result agrees with EqP predictions, because little mortality was observed for Diporeia in 10-d water-only exposures to fluoranthene in previous studies. After 10-d exposures, mortality of H. azteca was not significantly different from that of controls, even though measured interstitial water concentrations exceeded the previously determined 10-d water-only median lethal concentration (LC50). Equilibrium partitioning overpredicted fluoranthene sediment toxicity in this species. More mortality was observed for H. azteca at later time points, and a 16-d LC50 of 3550 nmol/g dry weight sediment (291 µmol/g organic carbon) was determined. A body burden of 1.10 µmol fluoranthene-equivalents/g wet weight in H. azteca was associated with 50% mortality after 16-d exposures. Body burdens as high as 5.9 µmol/g wet weight resulted in little mortality in Diporeia.

  8. The use of mathematical models to predict beach behavior for U.S. coastal engineering: A critical review

    USGS Publications Warehouse

    Thieler, E.R.; Pilkey, O.H.; Young, R.S.; Bush, D.M.; Chai, F.

    2000-01-01

    A number of assumed empirical relationships (e.g., the Bruun Rule, the equilibrium shoreface profile, longshore transport rate equation, beach length: durability relationship, and the renourishment factor) and deterministic numerical models (e.g., GENESIS, SBEACH) have become important tools for investigating coastal processes and for coastal engineering design in the U.S. They are also used as the basis for making public policy decisions, such as the feasibility of nourishing recreational beaches. A review of the foundations of these relationships and models, however, suggests that they are inadequate for the tasks for which they are used. Many of the assumptions used in analytical and numerical models are not valid in the context of modern oceanographic and geologic principles. We believe the models are oversimplifications of complex systems that are poorly understood. There are several reasons for this, including: (1) poor assumptions and important omissions in model formulation; (2) the use of relationships of questionable validity to predict the morphologic response to physical forcing; (3) the lack of hindsighting and objective evaluation of beach behavior predictions for engineering projects; (4) the incorrect use of model calibration and verification as assertions of model veracity; and (5) the fundamental inability to predict coastal evolution quantitatively at the engineering and planning time and space scales our society assumes and demands. It is essential that coastal geologists, beach designers and coastal modelers understand these model limitations. Each important model assumption must be examined in isolation; incorporating them into a model does not improve their validity. It is our belief that the models reviewed here should not be relied on as a design tool until they have been substantially modified and proven in real-world situations. The 'solution,' however, is not to increase the complexity of a model by increasing the number of variables

  9. Looking for biological factors to predict the risk of active cytomegalovirus infection in non-immunosuppressed critically ill patients.

    PubMed

    Bravo, Dayana; Clari, María A; Aguilar, Gerardo; Belda, Javier; Giménez, Estela; Carbonell, José A; Henao, Liliana; Navarro, David

    2014-05-01

    The identification of non-immunosuppressed critically ill patients most at risk for developing cytomegalovirus (CMV) reactivation is potentially of great clinical relevance. The current study was aimed at determining (i) whether single nucleotide polymorphisms in the genes coding for chemokine receptor 5 (CCR5), interleukin-10 (IL-10), and monocyte chemoattractant protein-1 (MCP-1) have an impact on the incidence rate of active CMV infection, (ii) whether serum levels of CMV-specific IgGs are associated with the risk of CMV reactivation, and (iii) whether detection of CMV DNA in saliva precedes that in the lower respiratory tract or the blood compartment. A total of 36 out of 78 patients (46%) developed an episode of active CMV infection. The incidence rate of active CMV infection was not significantly associated with any single nucleotide polymorphism. A trend towards a lower incidence of active CMV infection (P = 0.06) was noted in patients harboring the IL10 C/C genotype. Patients carrying the CCR5 A/A genotype had high CMV DNA loads in tracheal aspirates. The serum levels of CMV IgGs did not differ significantly between patients with a subsequent episode of active CMV infection (median, 217 IU/mL) or without one (median, 494 IU/mL). Detection of CMV DNA in saliva did not usually precede that in plasma and/or tracheal aspirates. In summary, the analysis of single nucleotide polymorphisms in the IL10 and CCR5 genes might help to determine the risk of active CMV infection or the level of CMV replication within episodes, respectively, in non-immunosuppressed critically ill patients.

  10. Addressing criticisms of existing predictive bias research: cognitive ability test scores still overpredict African Americans' job performance.

    PubMed

    Berry, Christopher M; Zhao, Peng

    2015-01-01

    Predictive bias studies have generally suggested that cognitive ability test scores overpredict job performance of African Americans, meaning these tests are not predictively biased against African Americans. However, at least 2 issues call into question existing over-/underprediction evidence: (a) a bias identified by Aguinis, Culpepper, and Pierce (2010) in the intercept test typically used to assess over-/underprediction and (b) a focus on the level of observed validity instead of operational validity. The present study developed and utilized a method of assessing over-/underprediction that draws on the math of subgroup regression intercept differences, does not rely on the biased intercept test, allows for analysis at the level of operational validity, and can use meta-analytic estimates as input values. Therefore, existing meta-analytic estimates of key parameters, corrected for relevant statistical artifacts, were used to determine whether African American job performance remains overpredicted at the level of operational validity. African American job performance was typically overpredicted by cognitive ability tests across levels of job complexity and across conditions wherein African American and White regression slopes did and did not differ. Because the present study does not rely on the biased intercept test and because appropriate statistical artifact corrections were carried out, the present study's results are not affected by the 2 issues mentioned above. The present study represents strong evidence that cognitive ability tests generally overpredict job performance of African Americans.

  11. Computational modelling of ovine critical-sized tibial defects with implanted scaffolds and prediction of the safety of fixator removal.

    PubMed

    Doyle, Heather; Lohfeld, Stefan; Dürselen, Lutz; McHugh, Peter

    2015-04-01

    Computational model geometries of tibial defects with two types of implanted tissue engineering scaffolds, β-tricalcium phosphate (β-TCP) and poly-ε-caprolactone (PCL)/β-TCP, are constructed from µ-CT scan images of the real in vivo defects. Simulations of each defect under four-point bending and under simulated in vivo axial compressive loading are performed. The mechanical stability of each defect is analysed using stress distribution analysis. The results of this analysis highlight the influence of callus volume, and of both scaffold volume and stiffness, on the load-bearing abilities of these defects. Clinically used image-based methods to predict the safety of removing external fixation are evaluated for each defect. Comparison of these measures with the results of computational analyses indicates that care must be taken in the interpretation of these measures.

  12. Critical Evaluation of Prediction Models for Phosphorus Partition between CaO-based Slags and Iron-based Melts during Dephosphorization Processes

    NASA Astrophysics Data System (ADS)

    Yang, Xue-Min; Li, Jin-Yan; Chai, Guo-Ming; Duan, Dong-Ping; Zhang, Jian

    2016-08-01

    According to the experimental results of hot metal dephosphorization by CaO-based slags at a commercial-scale hot metal pretreatment station, 16 models of the equilibrium quotient k_P or phosphorus partition L_P between CaO-based slags and iron-based melts, collected from the literature, have been evaluated. The collected 16 models for predicting the equilibrium quotient k_P can be converted to predict the phosphorus partition L_P. The results predicted by the 16 models cannot themselves serve as criteria for evaluating k_P or L_P because of the various forms or definitions of k_P or L_P. Thus, the measured phosphorus content [pct P] in the hot metal bath at the end point of the dephosphorization pretreatment process is applied as the fixed criterion for evaluating the collected 16 models. The collected 16 models can be described as linear functions of the form y = c0 + c1*x, in which the independent variable x represents the chemical composition of the slag, the intercept c0 (including the constant term) captures the temperature effect and other unmentioned or implicit thermodynamic factors, and the slope c1 is regressed from the experimental results of k_P or L_P. Thus, a general approach to developing a thermodynamic model for predicting the equilibrium quotient k_P, the phosphorus partition L_P, or [pct P] in iron-based melts during the dephosphorization process is proposed by revising the constant term in the intercept c0 for the 15 summarized models other than Suito's model (M3). The better models, those with an ideal revising possibility or flexibility among the collected 16, have been selected and recommended. Compared with the results predicted by the revised 15 models and Suito's model (M3), the developed IMCT-L_P model coupled with the dephosphorization mechanism proposed by the present authors can be applied to accurately predict the phosphorus partition L_P, with the lowest mean deviation of log L_P, at 2.33.
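Since each collected model has the linear form y = c0 + c1*x described above, recalibrating a slope (or revising an intercept) amounts to an ordinary least-squares problem. A minimal sketch with synthetic numbers, not slag data from the paper:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = c0 + c1*x, the common form
    of the collected dephosphorization models (x would be a slag
    composition term, y a quantity such as log L_P)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    c1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    c0 = my - c1 * mx  # intercept absorbs temperature/constant terms
    return c0, c1

# Synthetic composition terms vs. log-partition values
c0, c1 = fit_line([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.1, 3.9])
print(c0, c1)
```

Revising only the constant term in c0, as the paper proposes for 15 of the models, shifts the fitted line vertically while keeping each model's composition dependence c1 intact.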

  13. A novel method for predicting critical flashover (CFO) voltage insulation strengths of multiple dielectrics on distribution overhead lines

    SciTech Connect

    Shwehdi, M.H.; Shahzad, F.

    1996-12-31

    Electric utilities are striving to improve the appearance of distribution lines, applying different combinations of insulating components to establish the necessary insulation for such lines through the use of new insulators while simultaneously reducing lightning outages. The impulse critical flashover (CFO) voltages of many overhead line insulators are determined for single and multiple dielectrics (porcelain, fiberglass, polymer, and wood). Laboratory investigations and studies relating to the evaluation of CFO values of distribution lines with multiple dielectrics are reported. Data used by the industry for transmission lines are not fully applicable to estimating CFOs for distribution lines. Many engineers concerned with the design or operation of high voltage transmission lines have devised methods to estimate performance under lightning impulse. There is at present no such method available for estimating the insulation strengths of multiple dielectrics of distribution lines subjected to impulse CFO. This paper presents a method of estimating the CFO insulation strengths of two- and three-dielectric combinations used on distribution overhead lines using the developed Extended Multi Curves (EMC). The proper use and evaluation of the insulation level by this novel method has a major influence on the design and cost of distribution line construction and application, also improving the performance of specific line designs.

  14. Predictive value of cell-surface markers in infections in critically ill patients: protocol for an observational study (ImmuNe FailurE in Critical Therapy (INFECT) Study)

    PubMed Central

    Shankar-Hari, Manu; Weir, Christopher J; Rennie, Jillian; Antonelli, Jean; Rossi, Adriano G; Warner, Noel; Keenan, Jim; Wang, Alice; Brown, K Alun; Lewis, Sion; Mare, Tracey; Simpson, A John; Hulme, Gillian; Dimmick, Ian; Walsh, Timothy S

    2016-01-01

    Introduction Critically ill patients are at high risk of nosocomial infections, with between 20% and 40% of patients admitted to the intensive care unit (ICU) acquiring infections. These infections result in increased antibiotic use, and are associated with morbidity and mortality. Although critical illness is classically associated with hyperinflammation, the high rates of nosocomial infection argue for an important effect of impaired immunity. Our group recently demonstrated that a combination of 3 measures of immune cell function (namely neutrophil CD88, monocyte HLA-DR and % regulatory T cells) identified a patient population with a 2.4- to 5-fold greater risk of susceptibility to nosocomial infections. Methods and analysis This is a prospective, observational study to determine whether previously identified markers of susceptibility to nosocomial infection can be validated in a multicentre population, as well as testing several novel markers which may improve the prediction of nosocomial infection risk. Blood samples from critically ill patients (those admitted to the ICU for at least 48 hours and requiring mechanical ventilation alone or support of 2 or more organ systems) are taken and undergo whole blood staining for a range of immune cell surface markers. These samples undergo analysis on a standardised flow cytometry platform. Patients are followed up to determine whether they develop nosocomial infection. Infections need to meet strict prespecified criteria based on international guidelines; where these criteria are not met, an adjudication panel of experienced intensivists is asked to rule on the presence of infection. Secondary outcomes will be death from severe infection (sepsis) and change in organ failure. Ethics and dissemination Ethical approval, including for the involvement of adults lacking capacity, has been obtained from the respective English and Scottish Ethics Committees. Results will be disseminated through presentations at scientific meetings.

  15. Organic matter-solid phase interactions are critical for predicting arsenic release and plant uptake in Bangladesh paddy soils.

    PubMed

    Williams, Paul N; Zhang, Hao; Davison, William; Meharg, Andrew A; Hossain, Mahmud; Norton, Gareth J; Brammer, Hugh; Islam, M Rafiqul

    2011-07-15

    Agroecological zones within Bangladesh with low levels of arsenic in groundwater and soils produce rice that is high in arsenic with respect to other producing regions of the globe. Little is known about arsenic cycling in these soils and the labile fractions relevant for plant uptake when flooded. Soil porewater dynamics of field soils (n = 39) were recreated under standardized laboratory conditions to investigate the mobility and interplay of arsenic, Fe, Si, C, and other elements, in relation to rice grain element composition, using the dynamic sampling technique diffusive gradients in thin films (DGT). Based on a simple model using only labile, DGT-measured arsenic and dissolved organic carbon (DOC), concentrations of arsenic in Aman (monsoon season) rice grain were predicted reliably. DOC was the strongest determinant of arsenic solid-solution phase partitioning, while arsenic release to the soil porewater was shown to be decoupled from that of Fe. This study demonstrates the dual importance of organic matter (OM): it enhances arsenic release from soils while reducing bioavailability by sequestering arsenic in solution. PMID:21692537
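
    The two-predictor model form described above (grain arsenic as a linear function of DGT-labile arsenic and DOC) can be sketched as follows. The function name, coefficients, and units below are invented for illustration; they are not the fitted values from the study.

```python
# Hypothetical sketch of the model form described above: grain As predicted
# from DGT-labile porewater As and DOC. Coefficients are illustrative only.

def predict_grain_as(dgt_as_ugL: float, doc_mgL: float,
                     b0: float = 0.05, b_as: float = 0.8,
                     b_doc: float = -0.01) -> float:
    """Linear two-predictor model: grain As = b0 + b_as*As + b_doc*DOC.

    A negative DOC coefficient reflects the abstract's finding that OM
    reduces bioavailability by sequestering arsenic in solution.
    """
    return b0 + b_as * dgt_as_ugL + b_doc * doc_mgL

print(predict_grain_as(0.2, 10.0))
```

    With real data, b0, b_as, and b_doc would be fitted by ordinary least squares against measured grain concentrations.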

  16. Accurate Optical Reference Catalogs

    NASA Astrophysics Data System (ADS)

    Zacharias, N.

    2006-08-01

    Current and near future all-sky astrometric catalogs on the ICRF are reviewed with the emphasis on reference star data at optical wavelengths for user applications. The standard error of a Hipparcos Catalogue star position is now about 15 mas per coordinate. For the Tycho-2 data it is typically 20 to 100 mas, depending on magnitude. The USNO CCD Astrograph Catalog (UCAC) observing program was completed in 2004 and reductions toward the final UCAC3 release are in progress. This all-sky reference catalogue will have positional errors of 15 to 70 mas for stars in the 10 to 16 mag range, with a high degree of completeness. Proper motions for the approximately 60 million UCAC stars will be derived by combining UCAC astrometry with available early epoch data, including as yet unpublished scans of the complete set of AGK2, Hamburg Zone astrograph and USNO Black Birch programs. Accurate positional and proper motion data are combined in the Naval Observatory Merged Astrometric Dataset (NOMAD), which includes Hipparcos, Tycho-2, UCAC2, USNO-B1, and NPM+SPM plate scan data for astrometry, and is supplemented by multi-band optical photometry as well as 2MASS near infrared photometry. The Milli-Arcsecond Pathfinder Survey (MAPS) mission is currently being planned at USNO. This is a micro-satellite to obtain 1 mas positions, parallaxes, and 1 mas/yr proper motions for all bright stars down to about 15th magnitude. This program will be supplemented by a ground-based program to reach 18th magnitude at the 5 mas level.

  17. Feedback about more accurate versus less accurate trials: differential effects on self-confidence and activation.

    PubMed

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-06-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On day 1, participants performed a golf putting task under one of two conditions: one group received feedback on the most accurate trials, whereas another group received feedback on the least accurate trials. On day 2, participants completed an anxiety questionnaire and performed a retention test. Skin conductance level, as a measure of arousal, was determined. The results indicated that feedback about more accurate trials resulted in more effective learning as well as increased self-confidence. Also, activation was a predictor of performance. PMID:22808705

  18. Technological Basis and Scientific Returns for Absolutely Accurate Measurements

    NASA Astrophysics Data System (ADS)

    Dykema, J. A.; Anderson, J.

    2011-12-01

    The 2006 NRC Decadal Survey fostered a new appreciation for societal objectives as a driving motivation for Earth science. Many high-priority societal objectives are dependent on predictions of weather and climate. These predictions are based on numerical models, which derive from approximate representations of well-founded physics and chemistry on space and timescales appropriate to global and regional prediction. These laws of chemistry and physics in turn have a well-defined quantitative relationship with physical measurement units, provided these measurement units are linked to international measurement standards that are the foundation of contemporary measurement science and standards for engineering and commerce. Without this linkage, measurements have an ambiguous relationship to scientific principles that introduces avoidable uncertainty in analyses, predictions, and improved understanding of the Earth system. Since the improvement of climate and weather prediction is fundamentally dependent on the improvement of the representation of physical processes, measurement systems that reduce the ambiguity between physical truth and observations represent an essential component of a national strategy for understanding and living with the Earth system. This paper examines the technological basis and potential science returns of sensors that make measurements that are quantitatively tied on-orbit to international measurement standards, and thus testable for systematic errors. This measurement strategy provides several distinct benefits. First, because of the quantitative relationship between these international measurement standards and fundamental physical constants, measurements of this type accurately capture the true physical and chemical behavior of the climate system and are not subject to adjustment due to excluded measurement physics or instrumental artifacts. In addition, such measurements can be reproduced by scientists anywhere in the world, at any time

  19. Predicting the molecular complexity of sequencing libraries.

    PubMed

    Daley, Timothy; Smith, Andrew D

    2013-04-01

    Predicting the molecular complexity of a genomic sequencing library is a critical but difficult problem in modern sequencing applications. Methods to determine how deeply to sequence to achieve complete coverage, or to predict the benefits of additional sequencing, are lacking. We introduce an empirical Bayesian method to accurately characterize the molecular complexity of a DNA sample for almost any sequencing application on the basis of limited preliminary sequencing. PMID:23435259

  20. One-year mortality, quality of life and predicted life-time cost-utility in critically ill patients with acute respiratory failure

    PubMed Central

    2010-01-01

    Introduction High daily intensive care unit (ICU) costs are associated with the use of mechanical ventilation (MV) to treat acute respiratory failure (ARF), and assessment of quality of life (QOL) after critical illness and cost-effectiveness analyses are warranted. Methods Nationwide, prospective multicentre observational study in 25 Finnish ICUs. During an eight-week study period 958 consecutive adult ICU patients were treated with ventilatory support for over 6 hours. Of those 958, 619 (64.6%) survived one year, of whom 288 (46.5%) answered the quality of life questionnaire (EQ-5D). We calculated the EQ-5D index and predicted lifetime quality-adjusted life years (QALYs) gained using the age- and sex-matched life expectancy for survivors after one year. For patients who died, the actual lifetime was used. We divided all hospital costs for all ARF patients by the number of hospital survivors, and by all predicted lifetime QALYs. We also adjusted for those who died before one year and for those with missing QOL data in order to estimate the total QALYs. Results One-year mortality was 35% (95% CI 32 to 38%). For the 288 respondents the median [IQR] EQ-5D index after one year was lower than that of the age- and sex-matched general population: 0.70 [0.45 to 0.89] vs. 0.84 [0.81 to 0.88]. For these 288, the mean (SD) predicted lifetime QALYs was 15.4 (13.3). After adjustment for missing QOL the mean (SD) predicted lifetime QALYs was 11.3 (13.0) for all 958 ARF patients. The mean estimated cost was 20,739 € per hospital survivor, and the mean predicted lifetime cost-utility for all ARF patients was 1391 € per QALY. Conclusions Despite lower health-related QOL compared to reference values, our results suggest that cost per hospital survivor and lifetime cost-utility remain reasonable regardless of age, disease severity, and type or duration of ventilatory support in patients with ARF. PMID:20384998
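
    The cost-utility arithmetic described above (all hospital costs divided by the number of hospital survivors, and by the total predicted lifetime QALYs) can be sketched as follows. The figures used are hypothetical and do not reproduce the study's data.

```python
# Sketch of the cost-utility arithmetic described above. Inputs are
# hypothetical illustrations, not the Finnish study's actual figures.

def cost_utility(total_costs_eur: float, n_hospital_survivors: int,
                 total_lifetime_qalys: float) -> tuple[float, float]:
    """Return (cost per hospital survivor, cost per QALY) in euros."""
    cost_per_survivor = total_costs_eur / n_hospital_survivors
    cost_per_qaly = total_costs_eur / total_lifetime_qalys
    return cost_per_survivor, cost_per_qaly

# Hypothetical totals: 15 M€ of hospital costs, 700 survivors,
# 10,800 predicted lifetime QALYs across the whole cohort.
per_survivor, per_qaly = cost_utility(15_000_000, 700, 10_800)
print(round(per_survivor), round(per_qaly))
```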

  1. The performance of customised APACHE II and SAPS II in predicting mortality of mixed critically ill patients in a Thai medical intensive care unit.

    PubMed

    Khwannimit, B; Bhurayanontachai, R

    2009-09-01

    The aim of this study was to evaluate and compare the performance of customised Acute Physiology and Chronic Health Evaluation II (APACHE II) and Simplified Acute Physiology Score II (SAPS II) in predicting hospital mortality of mixed critically ill Thai patients in a medical intensive care unit. A prospective cohort study was conducted over a four-year period. The subjects were randomly divided into calibration and validation groups. Logistic regression analysis was used for customisation. The performance of the scores was evaluated by discrimination, calibration and overall fit in the overall group and across subgroups of the validation group. Two thousand and forty consecutive intensive care unit admissions during the study period were split into two groups. Both customised models showed excellent discrimination. The area under the receiver operating characteristic curve of the customised APACHE II was greater than that of the customised SAPS II (0.925 vs. 0.892, P < 0.001). The Hosmer-Lemeshow goodness-of-fit test showed good calibration for the customised APACHE II in the overall population and various subgroups, but insufficient calibration for the customised SAPS II. The customised SAPS II showed good calibration only in the younger, postoperative and sepsis patient subgroups. The overall performance of the customised APACHE II was better than that of the customised SAPS II (Brier score 0.089 vs. 0.109). Our results indicate that the customised APACHE II performs better than the customised SAPS II in predicting hospital mortality and could be used for mortality prediction and quality assessment in our unit or other intensive care units with a similar case mix.
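
    Two of the quantities above are directly computable: the Brier score (mean squared difference between predicted probability and 0/1 outcome) and logistic-regression customisation, which refits the relationship between the original score's logit and observed mortality. The sketch below uses only the standard library and plain gradient descent; the study's exact customisation equation is not given in the abstract, so this is an illustrative first-level recalibration, not the authors' procedure.

```python
import math

def brier_score(probs, outcomes):
    """Mean squared difference between predicted probabilities and 0/1 outcomes."""
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)

def customise_logit(logits, outcomes, lr=0.1, steps=2000):
    """Refit intercept a and slope b on the original score's logit by
    gradient descent on the logistic log-loss (illustrative sketch)."""
    a, b = 0.0, 1.0
    n = len(logits)
    for _ in range(steps):
        grad_a = grad_b = 0.0
        for x, y in zip(logits, outcomes):
            p = 1.0 / (1.0 + math.exp(-(a + b * x)))
            grad_a += (p - y) / n
            grad_b += (p - y) * x / n
        a -= lr * grad_a
        b -= lr * grad_b
    return a, b

# Toy example: logits from an original severity score and observed mortality.
a, b = customise_logit([-2.0, -1.0, 1.0, 2.0], [0, 0, 1, 1])
print(round(brier_score([0.1, 0.3, 0.7, 0.9], [0, 0, 1, 1]), 3))
```

    A lower Brier score indicates better overall performance, which is how the abstract ranks the two customised models.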

  2. Obstetric critical care: A prospective analysis of clinical characteristics, predictability, and fetomaternal outcome in a new dedicated obstetric intensive care unit.

    PubMed

    Gupta, Sunanda; Naithani, Udita; Doshi, Vimla; Bhargava, Vaibhav; Vijay, Bhavani S

    2011-03-01

    A 1 year prospective analysis of all critically ill obstetric patients admitted to a newly developed dedicated obstetric intensive care unit (ICU) was done in order to characterize causes of admission, interventions required, course, and fetomaternal outcome. The utility of the mortality probability model II (MPM II) at admission for predicting maternal mortality was also assessed. During this period there were 16,756 deliveries with 79 maternal deaths (maternal mortality rate 4.7/1000 deliveries). There were 24 ICU admissions (ICU utilization ratio 0.14%), with a mean age of 25.21±4.075 years and mean gestational age of 36.04±3.862 weeks. Postpartum admissions were significantly higher (83.33%, n=20, P<0.05), with more patients presenting with obstetric complications (91.66%, n=22, P<0.01) than with medical complications (8.32%, n=2). Obstetric haemorrhage (n=15, 62.5%) and haemodynamic instability (n=20, 83.33%) were significant risk factors for ICU admission (P=0.000). Inotropic support was required in 22 patients (91.66%) while 17 patients (70.83%) required ventilatory support, but neither was a risk factor for poor outcome. Mean durations of ventilation (30.17±21.65 h) and ICU stay (39.42±33.70 h) were significantly longer in survivors than in non-survivors (P=0.01 and P=0.00, respectively). The observed mortality (n=10, 41.67%) was significantly higher than the MPM II predicted death rate (26.43%, P=0.002). We conclude that obstetric haemorrhage leading to haemodynamic instability remains the leading cause of ICU admission and that MPM II scores at admission underpredict maternal mortality. PMID:21712871

  3. Superiority of Transcutaneous Oxygen Tension Measurements in Predicting Limb Salvage After Below-the-Knee Angioplasty: A Prospective Trial in Diabetic Patients With Critical Limb Ischemia

    SciTech Connect

    Redlich, Ulf; Xiong, Yan Y.; Pech, Maciej; Tautenhahn, Joerg; Halloul, Zuhir; Lobmann, Ralf; Adolf, Daniela; Ricke, Jens; Dudeck, Oliver

    2011-04-15

    Purpose: To assess postprocedural angiograms, the ankle-brachial index (ABI), and transcutaneous oxygen tension (TcPO{sub 2}) to predict outcome after infrageniculate angioplasty (PTA) in diabetic patients with critical limb ischemia (CLI) scheduled for amputation. Materials and Methods: PTA was performed in 28 diabetic patients with CLI confined to infrapopliteal vessels. We recorded patency of crural vessels, including the vascular supply of the foot, as well as the ABI and TcPO{sub 2} of the foot. Results: Technical success rate was 92.9% (n = 26), and the limb-salvage rate at 12 months was 60.7% (n = 17). The number of patent straight vessels above and below the level of the malleoli increased significantly in patients avoiding amputation. Amputation was unnecessary in 88.2% of patients (n = 15) when patency of at least one tibial artery was achieved. In 72.7% of patients (n = 8), patency of the peroneal artery alone was not sufficient for limb salvage. ABI was of no predictive value for limb salvage. TcPO{sub 2} values increased significantly only in patients not requiring amputation (P = 0.015). In patients with only one tibial artery supplying the foot, or only a patent peroneal artery on postprocedural angiograms, TcPO{sub 2} reliably predicted the outcome. Conclusion: Below-the-knee PTA as an isolated part of therapy was effective in preventing major amputation in more than half of diabetic patients with CLI. TcPO{sub 2} was a valid predictor of limb salvage, even when angiographic outcome criteria failed.

  4. Scaling for interfacial tensions near critical endpoints.

    PubMed

    Zinn, Shun-Yong; Fisher, Michael E

    2005-01-01

    Parametric scaling representations are obtained and studied for the asymptotic behavior of interfacial tensions in the full neighborhood of a fluid (or Ising-type) critical endpoint, i.e., as a function both of temperature and of density/order parameter or chemical potential/ordering field. Accurate nonclassical critical exponents and reliable estimates for the universal amplitude ratios are included naturally on the basis of the "extended de Gennes-Fisher" local-functional theory. Serious defects in previous scaling treatments are rectified and complete wetting behavior is represented; however, quantitatively small, but unphysical residual nonanalyticities on the wetting side of the critical isotherm are smoothed out "manually." Comparisons with the limited available observations are presented elsewhere but the theory invites new, searching experiments and simulations, e.g., for the vapor-liquid interfacial tension on the two sides of the critical endpoint isotherm for which an amplitude ratio of -3.25 ± 0.05 is predicted.

  5. Self-Assembly and Critical Aggregation Concentration Measurements of ABA Triblock Copolymers with Varying B Block Types: Model Development, Prediction, and Validation.

    PubMed

    Aydin, Fikret; Chu, Xiaolei; Uppaladadium, Geetartha; Devore, David; Goyal, Ritu; Murthy, N Sanjeeva; Zhang, Zheng; Kohn, Joachim; Dutt, Meenakshi

    2016-04-21

    The dissipative particle dynamics (DPD) simulation technique is a coarse-grained (CG) molecular dynamics-based approach that can effectively capture the hydrodynamics of complex systems while retaining essential information about the structural properties of the molecular species. An advantageous feature of DPD is that it utilizes soft repulsive interactions between the beads, which are CG representations of groups of atoms or molecules. In this study, we used the DPD simulation technique to study the aggregation characteristics of ABA triblock copolymers in aqueous medium. Pluronic polymers (PEG-PPO-PEG) were modeled as two segments of hydrophilic beads and one segment of hydrophobic beads. Tyrosine-derived PEG5K-b-oligo(desaminotyrosyl tyrosine octyl ester-suberate)-b-PEG5K (PEG5K-oligo(DTO-SA)-PEG5K) block copolymers possess alternating rigid and flexible components along the hydrophobic oligo(DTO-SA) chain, and were modeled as two segments of hydrophilic beads and one segment of hydrophobic, alternating soft and hard beads. The formation, structure, and morphology of the initial aggregates of the polymer molecules in aqueous medium were investigated by following the aggregation dynamics. The dimensions of the aggregates predicted by the computational approach were in good agreement with corresponding experimental results for the Pluronic and PEG5K-oligo(DTO-SA)-PEG5K block copolymers. In addition, DPD simulations were used to determine the critical aggregation concentration (CAC), which was compared with corresponding results from an experimental approach. For the Pluronic polymers F68, F88, F108, and F127, the computational results agreed well with experimental CAC measurements. For PEG5K-b-oligo(DTO-SA)-b-PEG5K block polymers, the complexity of the polymer structure made it difficult to directly determine their CAC values via the CG scheme. Therefore, we determined CAC values of a series of triblock copolymers with 3-8 DTO-SA units using DPD

  6. Predicting hydration Gibbs energies of alkyl-aromatics using molecular simulation: a comparison of current force fields and the development of a new parameter set for accurate solvation data.

    PubMed

    Garrido, Nuno M; Jorge, Miguel; Queimada, António J; Gomes, José R B; Economou, Ioannis G; Macedo, Eugénia A

    2011-10-14

    The Gibbs energy of hydration is an important quantity for understanding molecular behavior in aqueous systems at constant temperature and pressure. In this work we review the performance of some popular force fields, namely TraPPE, OPLS-AA and Gromos, in reproducing the experimental Gibbs energies of hydration of several alkyl-aromatic compounds--benzene, mono-, di- and tri-substituted alkylbenzenes--using molecular simulation techniques. In the second part of the paper, we report a new model that is able to improve such hydration energy predictions, based on Lennard-Jones parameters from the recent TraPPE-EH force field and atomic partial charges obtained from natural population analysis of density functional theory calculations. We apply a scaling factor determined by fitting the experimental hydration energy of only two solutes, and then present a simple rule to generate atomic partial charges for different substituted alkyl-aromatics. This rule has the added advantages of eliminating the unnecessary assumption of fixed charge on every substituted carbon atom and providing a simple guideline for extrapolating the charge assignment to any multi-substituted alkyl-aromatic molecule. The point charges derived here yield excellent predictions of experimental Gibbs energies of hydration, with an overall absolute average deviation of less than 0.6 kJ mol(-1). This new parameter set can also give good predictive performance for other thermodynamic properties and liquid structural information.

  7. Serial measurement of hFABP and high-sensitivity troponin I post-PCI in STEMI: how fast and accurate can myocardial infarct size and no-reflow be predicted?

    PubMed

    Uitterdijk, André; Sneep, Stefan; van Duin, Richard W B; Krabbendam-Peters, Ilona; Gorsse-Bakker, Charlotte; Duncker, Dirk J; van der Giessen, Willem J; van Beusekom, Heleen M M

    2013-10-01

    The objective of this study was to compare heart-specific fatty acid binding protein (hFABP) and high-sensitivity troponin I (hsTnI) via serial measurements to identify early time points to accurately quantify infarct size and no-reflow in a preclinical swine model of ST-elevated myocardial infarction (STEMI). Myocardial necrosis, usually confirmed by hsTnI or TnT, takes several hours of ischemia before plasma levels rise in the absence of reperfusion. We evaluated the fast marker hFABP compared with hsTnI to estimate infarct size and no-reflow upon reperfused (2 h occlusion) and nonreperfused (8 h occlusion) STEMI in swine. In STEMI (n = 4) and STEMI + reperfusion (n = 8) induced in swine, serial blood samples were taken for hFABP and hsTnI and compared with triphenyl tetrazolium chloride and thioflavin-S staining for infarct size and no-reflow at the time of euthanasia. hFABP increased faster than hsTnI upon occlusion (82 ± 29 vs. 180 ± 73 min, P < 0.05) and increased immediately upon reperfusion while hsTnI release was delayed 16 ± 3 min (P < 0.05). Peak hFABP and hsTnI reperfusion values were reached at 30 ± 5 and 139 ± 21 min, respectively (P < 0.05). Infarct size (containing 84 ± 0.6% no-reflow) correlated well with area under the curve for hFABP (r(2) = 0.92) but less for hsTnI (r(2) = 0.53). At 50 and 60 min reperfusion, hFABP correlated best with infarct size (r(2) = 0.94 and 0.93) and no-reflow (r(2) = 0.96 and 0.94) and showed high sensitivity for myocardial necrosis (2.3 ± 0.6 and 0.4 ± 0.6 g). hFABP rises faster and correlates better with infarct size and no-reflow than hsTnI in STEMI + reperfusion when measured early after reperfusion. The highest sensitivity detecting myocardial necrosis, 0.4 ± 0.6 g at 60 min postreperfusion, provides an accurate and early measurement of infarct size and no-reflow.
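
    The infarct-size correlations above are against the area under the biomarker release curve (AUC). A minimal trapezoidal-rule sketch of that calculation is shown below; the sampling times and hFABP concentrations are hypothetical, not the study's measurements.

```python
# Trapezoidal-rule AUC for an irregularly sampled biomarker release curve,
# the quantity correlated with infarct size above. Data are hypothetical.

def trapezoid_auc(times, values):
    """Area under the curve defined by (times, values) sample pairs."""
    return sum((values[i] + values[i + 1]) / 2.0 * (times[i + 1] - times[i])
               for i in range(len(times) - 1))

t = [0, 10, 30, 60, 120]           # minutes post-reperfusion (hypothetical)
c = [2.0, 40.0, 90.0, 60.0, 20.0]  # hFABP concentration, ng/ml (hypothetical)
print(trapezoid_auc(t, c))
```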

  8. Medical image analysis methods in MR/CT-imaged acute-subacute ischemic stroke lesion: Segmentation, prediction and insights into dynamic evolution simulation models. A critical appraisal☆

    PubMed Central

    Rekik, Islem; Allassonnière, Stéphanie; Carpenter, Trevor K.; Wardlaw, Joanna M.

    2012-01-01

    Over the last 15 years, basic thresholding techniques in combination with standard statistical correlation-based data analysis tools have been widely used to investigate different aspects of evolution of acute or subacute to late stage ischemic stroke in both human and animal data. Yet, a wave of biology-dependent and imaging-dependent issues is still untackled, pointing towards the key question: “how does an ischemic stroke evolve?” Paving the way for potential answers to this question, both magnetic resonance (MRI) and CT (computed tomography) images have been used to visualize the lesion extent, either with or without spatial distinction between dead and salvageable tissue. Combining diffusion and perfusion imaging modalities may provide the possibility of predicting further tissue recovery or eventual necrosis. Going beyond these basic thresholding techniques, in this critical appraisal, we explore different semi-automatic or fully automatic 2D/3D medical image analysis methods and mathematical models applied to human, animal (rats/rodents) and/or synthetic ischemic stroke to tackle one of the following three problems: (1) segmentation of infarcted and/or salvageable (also called penumbral) tissue, (2) prediction of final ischemic tissue fate (death or recovery) and (3) dynamic simulation of the lesion core and/or penumbra evolution. To highlight the key features in the reviewed segmentation and prediction methods, we propose a common categorization pattern. We also emphasize some key aspects of the methods, such as the imaging modalities required to build and test the presented approach, the number of patients/animals or synthetic samples, the use of external user interaction and the methods of assessment (clinical or imaging-based). Furthermore, we investigate how key difficulties posed by the evolution of stroke, such as swelling or reperfusion, were detected (or not) by each method. In the absence of any imaging-based macroscopic dynamic model

  9. Critical power derived from a 3-min all-out test predicts 16.1-km road time-trial performance.

    PubMed

    Black, Matthew I; Durant, Jacob; Jones, Andrew M; Vanhatalo, Anni

    2014-01-01

    It has been shown that the critical power (CP) in cycling estimated using a novel 3-min all-out protocol is reliable and closely matches the CP derived from conventional procedures. The purpose of this study was to assess the predictive validity of the all-out test CP estimate. We hypothesised that the all-out test CP would be significantly correlated with 16.1-km road time-trial (TT) performance and more strongly correlated with performance than the gas exchange threshold (GET), respiratory compensation point (RCP) and VO2 max. Ten club-level male cyclists (mean±SD: age 33.8±8.2 y, body mass 73.8±4.3 kg, VO2 max 60±4 ml·kg(-1)·min(-1)) performed a 16.1-km (10-mile) road TT, a ramp incremental test to exhaustion, and two 3-min all-out tests, the first of which served as familiarisation. The 16.1-km TT performance (27.1±1.2 min) was significantly correlated with the CP (309±34 W; r = -0.83, P<0.01) and total work done during the all-out test (70.9±6.5 kJ; r = -0.86, P<0.01), the ramp incremental test peak power (433±30 W; r = -0.75, P<0.05) and the RCP (315±29 W; r = -0.68, P<0.05), but not with GET (151±32 W; r = -0.21) or the VO2 max (4.41±0.25 L·min(-1); r = -0.60). These data provide evidence for the predictive validity and practical performance relevance of the 3-min all-out test. The 3-min all-out test CP may represent a useful addition to the battery of tests employed by applied sport physiologists or coaches to track fitness and predict performance in athletes.
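
    The conventional analysis of the 3-min all-out test takes CP as the mean power over the final 30 s (the "end-test power") and W' as the work done above CP; the sketch below applies that convention to a hypothetical 1 Hz power trace, not data from the study.

```python
# Conventional 3-min all-out test analysis: CP = mean power of the final
# 30 s; W' = work done above CP. The power trace below is hypothetical.

def analyse_all_out(power_1hz: list[float]) -> tuple[float, float]:
    """Return (CP in watts, W' in joules) from a 1 Hz power trace."""
    cp = sum(power_1hz[-30:]) / 30.0                     # end-test power, W
    w_prime = sum(max(p - cp, 0.0) for p in power_1hz)   # work above CP, J
    return cp, w_prime

# Hypothetical trace: 60 s at 500 W decaying to 120 s at 300 W.
trace = [500.0] * 60 + [300.0] * 120
cp, w_prime = analyse_all_out(trace)
print(cp, w_prime)
```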

  10. Fast and accurate determination of sites along the FUT2 in vitro transcript that are accessible to antisense oligonucleotides by application of secondary structure predictions and RNase H in combination with MALDI-TOF mass spectrometry

    PubMed Central

    Gabler, Angelika; Krebs, Stefan; Seichter, Doris; Förster, Martin

    2003-01-01

    Alteration of gene expression by use of antisense oligonucleotides has considerable potential for therapeutic purposes and scientific studies. Although applied for almost 25 years, this technique is still associated with difficulties in finding antisense-effective regions along the target mRNA. This is mainly due to strong secondary structures preventing binding of antisense oligonucleotides and RNase H, which plays a major role in antisense-mediated degradation of the mRNA. These difficulties make empirical testing of a large number of sequences complementary to various sites in the target mRNA a very lengthy and troublesome procedure. To overcome this problem, more recent strategies to find efficient antisense sites are based on secondary structure prediction and RNase H-dependent mechanisms. We were the first to directly combine these two strategies; antisense oligonucleotides complementary to predicted unpaired target mRNA regions were designed and hybridized to the corresponding RNAs. Incubation with RNase H led to cleavage of the RNA at the respective hybridization sites. Analysis of the RNA fragments by matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry, which has not been used in this context before, allowed exact determination of the cleavage site. Thus the technique described here is very promising when searching for effective antisense sites. PMID:12888531

  11. Accurate theoretical chemistry with coupled pair models.

    PubMed

    Neese, Frank; Hansen, Andreas; Wennmohs, Frank; Grimme, Stefan

    2009-05-19

    Quantum chemistry has found its way into the everyday work of many experimental chemists. Calculations can predict the outcome of chemical reactions, afford insight into reaction mechanisms, and be used to interpret structure and bonding in molecules. Thus, contemporary theory offers tremendous opportunities in experimental chemical research. However, even with present-day computers and algorithms, we cannot solve the many particle Schrödinger equation exactly; inevitably some error is introduced in approximating the solutions of this equation. Thus, the accuracy of quantum chemical calculations is of critical importance. The affordable accuracy depends on molecular size and particularly on the total number of atoms: for orientation, ethanol has 9 atoms, aspirin 21 atoms, morphine 40 atoms, sildenafil 63 atoms, paclitaxel 113 atoms, insulin nearly 800 atoms, and quaternary hemoglobin almost 12,000 atoms. Currently, molecules with up to approximately 10 atoms can be very accurately studied by coupled cluster (CC) theory, approximately 100 atoms with second-order Møller-Plesset perturbation theory (MP2), approximately 1000 atoms with density functional theory (DFT), and beyond that number with semiempirical quantum chemistry and force-field methods. The overwhelming majority of present-day calculations in the 100-atom range use DFT. Although these methods have been very successful in quantum chemistry, they do not offer a well-defined hierarchy of calculations that allows one to systematically converge to the correct answer. Recently a number of rather spectacular failures of DFT methods have been found, even for seemingly simple systems such as hydrocarbons, fueling renewed interest in wave function-based methods that incorporate the relevant physics of electron correlation in a more systematic way. Thus, it would be highly desirable to fill the gap between 10 and 100 atoms with highly correlated ab initio methods. We have found that one of the earliest (and now

  12. NNLOPS accurate associated HW production

    NASA Astrophysics Data System (ADS)

    Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia

    2016-06-01

    We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross Section Working Group.

  13. Critical Combinations of Radiation Dose and Volume Predict Intelligence Quotient and Academic Achievement Scores After Craniospinal Irradiation in Children With Medulloblastoma

    SciTech Connect

    Merchant, Thomas E.; Schreiber, Jane E.; Wu, Shengjie; Lukose, Renin; Xiong, Xiaoping; Gajjar, Amar

    2014-11-01

    Purpose: To prospectively follow children treated with craniospinal irradiation to determine critical combinations of radiation dose and volume that would predict for cognitive effects. Methods and Materials: Between 1996 and 2003, 58 patients (median age 8.14 years, range 3.99-20.11 years) with medulloblastoma received risk-adapted craniospinal irradiation followed by dose-intense chemotherapy and were followed longitudinally with multiple cognitive evaluations (through 5 years after treatment) that included intelligence quotient (estimated intelligence quotient, full-scale, verbal, and performance) and academic achievement (math, reading, spelling) tests. Craniospinal irradiation consisted of 23.4 Gy for average-risk patients (nonmetastatic) and 36-39.6 Gy for high-risk patients (metastatic or residual disease >1.5 cm{sup 2}). The primary site was treated using conformal or intensity modulated radiation therapy with a 2-cm clinical target volume margin. The effects of clinical variables and of radiation dose to different brain volumes were modeled to estimate cognitive scores after treatment. Results: A decline with time in all test scores was observed for the entire cohort. Sex, race, and cerebrospinal fluid shunt status had a significant impact on baseline scores. Age and mean radiation dose to specific brain volumes, including the temporal lobes and hippocampi, had a significant impact on longitudinal scores. Dichotomized dose distributions at 25 Gy, 35 Gy, 45 Gy, and 55 Gy were modeled to show the impact of the high-dose volume on longitudinal test scores. The 50% risk of a below-normal cognitive test score was calculated according to mean dose, and for dose intervals between 25 Gy and 55 Gy in 10-Gy increments, stratified by brain volume and age. Conclusions: The ability to predict cognitive outcomes in children with medulloblastoma using dose-effects models for different brain subvolumes will improve treatment planning, guide intervention, and help

  14. Modeling the effects of light and temperature on algae growth: state of the art and critical assessment for productivity prediction during outdoor cultivation.

    PubMed

    Béchet, Quentin; Shilton, Andy; Guieysse, Benoit

    2013-12-01

    The ability to model algal productivity under transient conditions of light intensity and temperature is critical for assessing the profitability and sustainability of full-scale algae cultivation outdoors. However, a review of over 40 modeling approaches reveals that most of the models hitherto described in the literature have not been validated under conditions relevant to outdoor cultivation. With respect to light intensity, we therefore categorized and assessed these models based on their theoretical ability to account for the light gradients and short light cycles experienced in well-mixed dense outdoor cultures. Type I models were defined as models predicting the rate of photosynthesis of the entire culture as a function of the incident or average light intensity reaching the culture. Type II models were defined as models computing productivity as the sum of local productivities within the cultivation broth (based on the light intensity locally experienced by individual cells) without consideration of short light cycles. Type III models were then defined as models considering the impacts of both light gradients and short light cycles. Whereas Type I models are easy to implement, they are theoretically not applicable to outdoor systems outside the range of experimental conditions used for their development. By contrast, Type III models offer significant refinement but the complexity of the inputs needed currently restricts their practical application. We therefore propose that Type II models currently offer the best compromise between accuracy and practicability for full-scale engineering application. With respect to temperature, we defined "coupled" and "uncoupled" models as the approaches that do and do not account, respectively, for the potential interdependence of light and temperature effects on the rate of photosynthesis. Due to the high number of coefficients of coupled models and the associated risk of overfitting, the recommended approach is uncoupled
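The Type II idea can be illustrated with a minimal sketch: local photosynthesis rates are computed from Beer-Lambert-attenuated light and summed over depth, ignoring short light cycles. The function name, the Monod-type P-I response, and all parameter values below are illustrative assumptions, not taken from the review.

```python
import math

def type_ii_productivity(I0, depth, k_ext, p_max, K_I, n_layers=100):
    """Depth-averaged productivity as the sum of local rates over the
    light gradient (Beer-Lambert attenuation), ignoring light cycles."""
    dz = depth / n_layers
    total = 0.0
    for i in range(n_layers):
        z = (i + 0.5) * dz                         # layer mid-point
        I_local = I0 * math.exp(-k_ext * z)        # local light intensity
        total += p_max * I_local / (K_I + I_local) * dz  # Monod-type P-I response
    return total / depth                           # volume-averaged rate

# Example: strongly attenuating (dense) vs weakly attenuating (dilute) culture
dense = type_ii_productivity(I0=2000, depth=0.3, k_ext=50.0, p_max=1.0, K_I=100.0)
dilute = type_ii_productivity(I0=2000, depth=0.3, k_ext=5.0, p_max=1.0, K_I=100.0)
print(dense, dilute)  # the dilute culture stays closer to light saturation
```

The same structure accepts any local P-I response, which is the practical appeal of Type II models.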

  15. Accurate phase-shift velocimetry in rock.

    PubMed

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R; Holmes, William M

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models. PMID:27111139
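For context, phase-shift velocimetry rests on the narrow-pulse PFG relation between accumulated phase and mean displacement, φ = γ·g·δ·Δ·v. The sketch below inverts that relation; the helper name and parameter values are illustrative assumptions, not from the paper.

```python
# Narrow-pulse PFG approximation: phi = gamma * g * delta * Delta * v
GAMMA_H = 2.675e8        # 1H gyromagnetic ratio, rad s^-1 T^-1

def velocity_from_phase(phi, g, delta, Delta, gamma=GAMMA_H):
    """Invert the PFG phase relation to recover mean velocity (m/s)."""
    return phi / (gamma * g * delta * Delta)

# Illustrative numbers (assumed): gradient 0.1 T/m, 1 ms pulses, 50 ms spacing
g, delta, Delta = 0.1, 1e-3, 50e-3
v_true = 1e-3                                     # 1 mm/s
phi = GAMMA_H * g * delta * Delta * v_true        # forward relation
print(velocity_from_phase(phi, g, delta, Delta))  # recovers ~1e-3 m/s
```

The paper's point is that this inversion is only accurate when the per-voxel displacement distribution is symmetric, which the proposed method is designed to ensure.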

  18. Accurate Mass Measurements in Proteomics

    SciTech Connect

    Liu, Tao; Belov, Mikhail E.; Jaitly, Navdeep; Qian, Weijun; Smith, Richard D.

    2007-08-01

    To understand different aspects of life at the molecular level, one would think that ideally all components of specific processes should be individually isolated and studied in detail. Reductionist approaches, i.e., studying one biological event on a one-gene or one-protein-at-a-time basis, indeed have made significant contributions to our understanding of many basic facts of biology. However, these individual “building blocks” cannot be visualized as a comprehensive “model” of the life of cells, tissues, and organisms, without using more integrative approaches.1,2 For example, the emerging field of “systems biology” aims to quantify all of the components of a biological system to assess their interactions and to integrate diverse types of information obtainable from this system into models that could explain and predict behaviors.3-6 Recent breakthroughs in genomics, proteomics, and bioinformatics are making this daunting task a reality.7-14 Proteomics, the systematic study of the entire complement of proteins expressed by an organism, tissue, or cell under a specific set of conditions at a specific time (i.e., the proteome), has become an essential enabling component of systems biology. While the genome of an organism may be considered static over short timescales, the expression of that genome as the actual gene products (i.e., mRNAs and proteins) is a dynamic event that is constantly changing due to the influence of environmental and physiological conditions. Exclusive monitoring of the transcriptomes can be carried out using high-throughput cDNA microarray analysis,15-17 however the measured mRNA levels do not necessarily correlate strongly with the corresponding abundances of proteins.18-20 The actual amount of functional proteins can be altered significantly and become independent of mRNA levels as a result of post-translational modifications (PTMs),21 alternative splicing,22,23 and protein turnover.24,25 Moreover, the functions of expressed

  19. Accurate SHAPE-directed RNA structure determination

    PubMed Central

    Deigan, Katherine E.; Li, Tian W.; Mathews, David H.; Weeks, Kevin M.

    2009-01-01

    Almost all RNAs can fold to form extensive base-paired secondary structures. Many of these structures then modulate numerous fundamental elements of gene expression. Deducing these structure–function relationships requires that it be possible to predict RNA secondary structures accurately. However, RNA secondary structure prediction for large RNAs, such that a single predicted structure for a single sequence reliably represents the correct structure, has remained an unsolved problem. Here, we demonstrate that quantitative, nucleotide-resolution information from a SHAPE experiment can be interpreted as a pseudo-free energy change term and used to determine RNA secondary structure with high accuracy. Free energy minimization, by using SHAPE pseudo-free energies, in conjunction with nearest neighbor parameters, predicts the secondary structure of deproteinized Escherichia coli 16S rRNA (>1,300 nt) and a set of smaller RNAs (75–155 nt) with accuracies of up to 96–100%, which are comparable to the best accuracies achievable by comparative sequence analysis. PMID:19109441
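The pseudo-free-energy idea can be sketched with the commonly cited log-linear form ΔG_SHAPE = m·ln(S + 1) + b, added to each base-pair stack during free energy minimization. The slope and intercept defaults below are assumptions for illustration, not guaranteed to match the paper's fitted values.

```python
import math

def shape_pseudo_energy(reactivity, m=2.6, b=-0.8):
    """Per-nucleotide pseudo-free-energy change (kcal/mol) derived from a
    SHAPE reactivity S, using the log-linear form dG = m*ln(S+1) + b."""
    return m * math.log(reactivity + 1.0) + b

# High reactivity (flexible, likely unpaired) penalizes pairing;
# low reactivity (conformationally constrained) slightly favors it.
print(shape_pseudo_energy(2.0))   # positive: discourages pairing
print(shape_pseudo_energy(0.05))  # negative: encourages pairing
```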

  20. Methods for Efficiently and Accurately Computing Quantum Mechanical Free Energies for Enzyme Catalysis.

    PubMed

    Kearns, F L; Hudson, P S; Boresch, S; Woodcock, H L

    2016-01-01

    Enzyme activity is inherently linked to free energies of transition states, ligand binding, protonation/deprotonation, etc.; these free energies, and thus enzyme function, can be affected by residue mutations, allosterically induced conformational changes, and much more. Therefore, being able to predict free energies associated with enzymatic processes is critical to understanding and predicting their function. Free energy simulation (FES) has historically been a computational challenge as it requires both the accurate description of inter- and intramolecular interactions and adequate sampling of all relevant conformational degrees of freedom. The hybrid quantum mechanical molecular mechanical (QM/MM) framework is the current tool of choice when accurate computations of macromolecular systems are essential. Unfortunately, robust and efficient approaches that employ the high levels of computational theory needed to accurately describe many reactive processes (i.e., ab initio, DFT), while also including explicit solvation effects and accounting for extensive conformational sampling are essentially nonexistent. In this chapter, we will give a brief overview of two recently developed methods that mitigate several major challenges associated with QM/MM FES: the QM non-Boltzmann Bennett's acceptance ratio method and the QM nonequilibrium work method. We will also describe usage of these methods to calculate free energies (1) for relative properties and (2) along reaction paths, using simple test cases with relevance to enzymes.
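As a much-simplified stand-in for the estimators discussed (this is plain exponential, or Zwanzig, averaging, not the non-Boltzmann Bennett or nonequilibrium work methods themselves), the following sketch shows how a free-energy difference is estimated from sampled energy gaps between two Hamiltonians:

```python
import math, random

def zwanzig_free_energy(dU_samples, kT=0.596):  # kT in kcal/mol near 300 K
    """Free-energy difference from exponential averaging of energy gaps
    dU = U_target - U_reference, sampled on the reference ensemble."""
    n = len(dU_samples)
    avg = sum(math.exp(-dU / kT) for dU in dU_samples) / n
    return -kT * math.log(avg)

random.seed(0)
# Gaussian energy gaps: the analytic answer is mean - variance/(2 kT)
mu, sigma = 1.0, 0.2
samples = [random.gauss(mu, sigma) for _ in range(200000)]
dA = zwanzig_free_energy(samples)
print(dA)  # close to 1.0 - 0.04/(2*0.596) ≈ 0.966 kcal/mol
```

The estimator's variance explodes when the two ensembles overlap poorly, which is precisely the problem the chapter's more robust methods are built to mitigate.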

  2. Can Self-Organizing Maps Accurately Predict Photometric Redshifts?

    NASA Astrophysics Data System (ADS)

    Way, M. J.; Klose, C. D.

    2012-03-01

    We present an unsupervised machine-learning approach that can be employed for estimating photometric redshifts. The proposed method is based on a vector quantization called the self-organizing-map (SOM) approach. A variety of photometrically derived input values were utilized from the Sloan Digital Sky Survey's main galaxy sample, luminous red galaxy, and quasar samples, along with the PHAT0 data set from the Photo-z Accuracy Testing project. Regression results obtained with this new approach were evaluated in terms of root-mean-square error (RMSE) to estimate the accuracy of the photometric redshift estimates. The results demonstrate competitive RMSE and outlier percentages when compared with several other popular approaches, such as artificial neural networks and Gaussian process regression. SOM RMSE results (using Δz = z_phot - z_spec) are 0.023 for the main galaxy sample, 0.027 for the luminous red galaxy sample, 0.418 for quasars, and 0.022 for PHAT0 synthetic data. The results demonstrate that there are nonunique solutions for estimating SOM RMSEs. Further research is needed in order to find more robust estimation techniques using SOMs, but the results herein are a positive indication of their capabilities when compared with other well-known methods.
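The RMSE metric quoted above is straightforward to reproduce; the redshift values below are hypothetical:

```python
import math

def photoz_rmse(z_phot, z_spec):
    """Root-mean-square error of photometric redshift estimates,
    with residuals dz = z_phot - z_spec as in the text."""
    residuals = [p - s for p, s in zip(z_phot, z_spec)]
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

# Hypothetical estimates vs. spectroscopic truth:
z_spec = [0.10, 0.25, 0.40, 0.55]
z_phot = [0.12, 0.22, 0.41, 0.57]
print(photoz_rmse(z_phot, z_spec))  # ≈ 0.021
```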

  3. Ethics and epistemology of accurate prediction in clinical research.

    PubMed

    Hey, Spencer Phillips

    2015-07-01

    All major research ethics policies assert that the ethical review of clinical trial protocols should include a systematic assessment of risks and benefits. But despite this policy, protocols do not typically contain explicit probability statements about the likely risks or benefits involved in the proposed research. In this essay, I articulate a range of ethical and epistemic advantages that explicit forecasting would offer to the health research enterprise. I then consider how some particular confidence levels may come into conflict with the principles of ethical research.

  4. WGS accurately predicts antimicrobial resistance in Escherichia coli

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Objectives: To determine the effectiveness of whole-genome sequencing (WGS) in identifying resistance genotypes of multidrug-resistant Escherichia coli (E. coli) and whether these correlate with observed phenotypes. Methods: Seventy-six E. coli strains were isolated from farm cattle and measured f...

  6. Profitable capitation requires accurate costing.

    PubMed

    West, D A; Hicks, L L; Balas, E A; West, T D

    1996-01-01

    In the name of costing accuracy, nurses are asked to track inventory use on a per-treatment basis when more significant costs, such as general overhead and nursing salaries, are usually allocated to patients or treatments on an average cost basis. Accurate treatment costing and financial viability require analysis of all resources actually consumed in treatment delivery, including nursing services and inventory. More precise costing information enables more profitable decisions, as is demonstrated by comparing the ratio-of-cost-to-treatment method (aggregate costing) with alternative activity-based costing methods (ABC). Nurses must participate in this costing process to assure that capitation bids are based upon accurate costs rather than simple averages. PMID:8788799
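A toy comparison of the two approaches makes the point: aggregate costing gives every treatment the same average, while activity-based costing charges each treatment for what it consumed. All dollar figures and activity drivers below are hypothetical.

```python
def aggregate_cost(total_costs, n_treatments):
    """Ratio-of-cost-to-treatment: every treatment gets the average."""
    return total_costs / n_treatments

def abc_cost(minutes_nursing, rate_nursing, supplies, overhead_share):
    """Activity-based costing: charge each treatment for the resources
    it actually consumed."""
    return minutes_nursing * rate_nursing + supplies + overhead_share

# Hypothetical clinic: 1000 treatments costing $200,000 in total
avg = aggregate_cost(200_000, 1000)        # $200 for every treatment
simple = abc_cost(15, 1.0, 20, 40)         # light case: $75
complex_case = abc_cost(90, 1.0, 120, 40)  # heavy case: $250
print(avg, simple, complex_case)
```

A capitation bid priced at the $200 average would overcharge the light cases and badly underprice the heavy ones.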

  7. Accurate determination of segmented X-ray detector geometry

    PubMed Central

    Yefanov, Oleksandr; Mariani, Valerio; Gati, Cornelius; White, Thomas A.; Chapman, Henry N.; Barty, Anton

    2015-01-01

    Recent advances in X-ray detector technology have resulted in the introduction of segmented detectors composed of many small detector modules tiled together to cover a large detection area. Due to mechanical tolerances and the desire to be able to change the module layout to suit the needs of different experiments, the pixels on each module might not align perfectly on a regular grid. Several detectors are designed to permit detector sub-regions (or modules) to be moved relative to each other for different experiments. Accurate determination of the location of detector elements relative to the beam-sample interaction point is critical for many types of experiment, including X-ray crystallography, coherent diffractive imaging (CDI), small angle X-ray scattering (SAXS) and spectroscopy. For detectors with moveable modules, the relative positions of pixels are no longer fixed, necessitating the development of a simple procedure to calibrate detector geometry after reconfiguration. We describe a simple and robust method for determining the geometry of segmented X-ray detectors using measurements obtained by serial crystallography. By comparing the location of observed Bragg peaks to the spot locations predicted from the crystal indexing procedure, the position, rotation and distance of each module relative to the interaction region can be refined. We show that the refined detector geometry greatly improves the results of experiments. PMID:26561117
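The refinement idea, reduced to a pure per-module translation, can be sketched as a least-squares shift between predicted and observed peak positions (synthetic data; the real procedure also refines each module's rotation and detector distance):

```python
def refine_module_offset(predicted, observed):
    """Least-squares translation of one detector module: for a pure shift,
    the optimum is the mean residual between observed Bragg peaks and the
    spot positions predicted by indexing."""
    n = len(predicted)
    dx = sum(o[0] - p[0] for p, o in zip(predicted, observed)) / n
    dy = sum(o[1] - p[1] for p, o in zip(predicted, observed)) / n
    return dx, dy

# Synthetic module shifted by (+2.0, -1.5) pixels plus small noise:
predicted = [(10.0, 10.0), (50.0, 20.0), (30.0, 80.0), (70.0, 60.0)]
observed = [(12.1, 8.4), (51.9, 18.6), (32.0, 78.5), (72.0, 58.5)]
print(refine_module_offset(predicted, observed))  # ≈ (2.0, -1.5)
```

Averaging over many indexed patterns, as in serial crystallography, makes the same estimate robust to per-peak noise.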

  10. Accurate documentation and wound measurement.

    PubMed

    Hampton, Sylvie

    This article, part 4 in a series on wound management, addresses the sometimes routine yet crucial task of documentation. Clear and accurate records of a wound enable its progress to be determined so the appropriate treatment can be applied. Thorough records mean any practitioner picking up a patient's notes will know when the wound was last checked, how it looked and what dressing and/or treatment was applied, ensuring continuity of care. Documenting every assessment also has legal implications, demonstrating due consideration and care of the patient and the rationale for any treatment carried out. Part 5 in the series discusses wound dressing characteristics and selection.

  11. Accurately measuring MPI broadcasts in a computational grid

    SciTech Connect

    Karonis N T; de Supinski, B R

    1999-05-06

    An MPI library's implementation of broadcast communication can significantly affect the performance of applications built with that library. In order to choose between similar implementations or to evaluate available libraries, accurate measurements of broadcast performance are required. As we demonstrate, existing methods for measuring broadcast performance are either inaccurate or inadequate. Fortunately, we have designed an accurate method for measuring broadcast performance, even in a challenging grid environment. Measuring broadcast performance is not easy. Simply sending one broadcast after another allows them to proceed through the network concurrently, thus resulting in inaccurate per-broadcast timings. Existing methods either fail to eliminate this pipelining effect or eliminate it by introducing overheads that are as difficult to measure as the performance of the broadcast itself. This problem becomes even more challenging in grid environments. Latencies along different links can vary significantly. Thus, an algorithm's performance is difficult to predict from its communication pattern. Even when accurate prediction is possible, the pattern is often unknown. Our method introduces a measurable overhead to eliminate the pipelining effect, regardless of variations in link latencies. Accurate measurements would help users choose between different available implementations. Also, accurate and complete measurements could guide use of a given implementation to improve application performance. These choices will become even more important as grid-enabled MPI libraries [6, 7] become more common, since bad choices are likely to cost significantly more in grid environments. In short, the distributed processing community needs accurate, succinct and complete measurements of collective communications performance. Since successive collective communications can often proceed concurrently, accurately measuring them is difficult. Some benchmarks use knowledge of the communication algorithm to predict the

  12. Nano-size scaling of alloy intra-particle vs. inter-particle separation transitions: prediction of distinctly interface-affected critical behaviour.

    PubMed

    Polak, M; Rubinovich, L

    2016-07-21

    Phase-separation second-order transitions in binary alloy particles consisting of ∼1000 up to ∼70 000 atoms (∼1-10 nm) are modeled, focusing on the unexplored issue of finite-size scaling in such systems, particularly on evaluation of correlation-length critical exponents. Our statistical-thermodynamic approach is based on a mean-field analytical expression for the Ising model free energy that facilitates highly efficient computations furnishing comprehensive data for fcc rectangular nanoparticles (NPs). These are summed up in intra- and inter-particle scaling plots as well as in nanophase separation diagrams. Temperature-induced variations in the interface thickness in Janus-type intra-particle configurations and NP size-dependent shifts in the critical temperature of their transition to solid-solution reflect power-law behavior with the same critical exponent, ν = 0.83. It is attributed to dominant interfacial effects that are absent in inter-particle transitions. Variations in ν with nano-size, as revealed by a refined analysis, are linearly extrapolated in order to bridge the gap to larger particles within and well beyond the nanoscale, ultimately yielding ν = 1.0. Besides these findings, the study indicates the key role of the surface-area to volume ratio as an effective linear size, revealing a universal, particle-shape independent, nanoscaling of the critical-temperature shifts. PMID:27338842
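The critical-temperature shifts described above follow the standard finite-size-scaling ansatz; writing it out (with the surface-area-to-volume ratio serving as the effective inverse linear size, as the study indicates):

```latex
% Finite-size-scaling ansatz for the shift of the critical temperature,
% with correlation-length exponent \nu (0.83 at the nanoscale, tending to
% 1.0 for larger particles in this mean-field treatment):
\[
  T_c(\infty) - T_c(L) \;\propto\; L^{-1/\nu},
  \qquad
  L_{\mathrm{eff}} \sim \left(\frac{S}{V}\right)^{-1}.
\]
```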

  13. Toward Accurate and Quantitative Comparative Metagenomics.

    PubMed

    Nayfach, Stephen; Pollard, Katherine S

    2016-08-25

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341
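One concrete example of an abundance statistic that does estimate a meaningful community parameter: normalizing read counts by genome length before renormalizing estimates the relative abundance of cells rather than of DNA, since raw read fractions over-weight taxa with large genomes. Taxa names and counts below are hypothetical.

```python
def relative_abundance(read_counts, genome_lengths):
    """Relative cell abundance per taxon: read counts are converted to
    per-genome coverage (reads / genome length) and then renormalized."""
    coverage = {t: read_counts[t] / genome_lengths[t] for t in read_counts}
    total = sum(coverage.values())
    return {t: c / total for t, c in coverage.items()}

reads = {"A": 9000, "B": 1000}              # taxon A has 9x the reads...
lengths = {"A": 9_000_000, "B": 1_000_000}  # ...but also a 9x larger genome
print(relative_abundance(reads, lengths))   # both ~0.5: equal cell abundance
```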

  15. Predictability of the Earth's polar motion

    NASA Technical Reports Server (NTRS)

    Chao, B. F.

    1984-01-01

    A comprehensive, experimental study of the predictability of the polar motion using a homogeneous BIH (Bureau International de l'Heure) data set is presented. Based on knowledge of the physics of the annual and the Chandler wobbles, the numerical model for the polar motion is constructed by allowing the wobble periods to vary. Using an optimum base length of 6 years for prediction, this floating-period model, equipped with a non-linear least-squares estimator, is found to yield polar motion predictions accurate to between 0.012 and 0.024 arcseconds depending on the prediction length up to one year, corresponding to a predictability of 91-83%. This represents a considerable improvement over the conventional fixed-period predictor, which does not respond to variations in the apparent wobble periods. The superiority of the floating-period predictor to other predictors based on critically different numerical models is also demonstrated.
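The floating-period idea can be sketched as a grid search over trial periods with a linear least-squares fit at each; this is a simplification of the paper's non-linear estimator, and all values below are synthetic.

```python
import math

def fit_floating_period(t, x, periods):
    """Fit x(t) = a*cos(2*pi*t/T) + b*sin(2*pi*t/T) for each trial period T
    by linear least squares, keeping the T with the smallest residual."""
    best = None
    for T in periods:
        c = [math.cos(2 * math.pi * ti / T) for ti in t]
        s = [math.sin(2 * math.pi * ti / T) for ti in t]
        # Normal equations for the two linear coefficients a, b
        scc = sum(ci * ci for ci in c); sss = sum(si * si for si in s)
        scs = sum(ci * si for ci, si in zip(c, s))
        scx = sum(ci * xi for ci, xi in zip(c, x))
        ssx = sum(si * xi for si, xi in zip(s, x))
        det = scc * sss - scs * scs
        a = (scx * sss - ssx * scs) / det
        b = (ssx * scc - scx * scs) / det
        resid = sum((xi - a * ci - b * si) ** 2
                    for xi, ci, si in zip(x, c, s))
        if best is None or resid < best[0]:
            best = (resid, T, a, b)
    return best[1:]  # (period, a, b)

# Synthetic wobble with a 433-day period and 0.15 amplitude:
t = list(range(0, 2000, 10))
x = [0.15 * math.cos(2 * math.pi * ti / 433.0) for ti in t]
T, a, b = fit_floating_period(t, x, periods=range(400, 470))
print(T, a)  # recovers T = 433 and a ≈ 0.15
```

Once the period is re-estimated on each sliding base window, extrapolating the fitted sinusoid gives the prediction; a fixed-period predictor skips the search and so cannot track apparent period drift.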

  16. SPLASH: Accurate OH maser positions

    NASA Astrophysics Data System (ADS)

    Walsh, Andrew; Gomez, Jose F.; Jones, Paul; Cunningham, Maria; Green, James; Dawson, Joanne; Ellingsen, Simon; Breen, Shari; Imai, Hiroshi; Lowe, Vicki; Jones, Courtney

    2013-10-01

    The hydroxyl (OH) 18 cm lines are powerful and versatile probes of diffuse molecular gas that may trace a largely unstudied component of the Galactic ISM. SPLASH (the Southern Parkes Large Area Survey in Hydroxyl) is a large, unbiased and fully-sampled survey of OH emission, absorption and masers in the Galactic Plane that will achieve sensitivities an order of magnitude better than previous work. In this proposal, we request ATCA time to follow up OH maser candidates. This will give us accurate (~10") positions of the masers, which can be compared to other maser positions from HOPS, MMB and MALT-45 and will provide full polarisation measurements towards a sample of OH masers that have not been observed in MAGMO.

  17. Accurate thickness measurement of graphene

    NASA Astrophysics Data System (ADS)

    Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.

    2016-03-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  18. Accurate thickness measurement of graphene.

    PubMed

    Shearer, Cameron J; Slattery, Ashley D; Stapleton, Andrew J; Shapter, Joseph G; Gibson, Christopher T

    2016-03-29

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  19. Mathematical modelling of patient flows to predict critical care capacity required following the merger of two district general hospitals into one.

    PubMed

    Williams, J; Dumont, S; Parry-Jones, J; Komenda, I; Griffiths, J; Knight, V

    2015-01-01

    There is both medical and political drive to centralise secondary services in larger hospitals throughout the National Health Service. High-volume services in some areas of care have been shown to achieve better outcomes and efficiencies arising from economies of scale. We sought to produce a mathematical model using the historical critical care demand in two District General Hospitals to determine objectively the requisite critical care capacity in a newly built hospital. We also sought to determine how well the new single unit would be able to meet changes in demand. The intention is that the model should be generic and transferable for those looking to merge and rationalise services on to one site. One of the advantages of mathematical modelling is the ability to interrogate the model to investigate any number of different scenarios; some of these are presented. PMID:25267582
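
    A standard building block for this kind of capacity question is the Erlang loss (M/M/c/c) model, which gives the probability that an arriving patient finds every critical care bed occupied. The sketch below uses invented demand figures, not the hospitals' data, and omits refinements (day-of-week effects, transfers between units) that a full patient-flow model would include.

```python
import math

def erlang_b(offered_load, beds):
    # Blocking probability of an M/M/c/c loss system: the chance that
    # an arriving patient finds all `beds` occupied, for a given offered
    # load (admission rate x mean length of stay, in erlangs).
    terms = [offered_load ** k / math.factorial(k) for k in range(beds + 1)]
    return terms[-1] / sum(terms)

# Hypothetical merged-site demand: 4.2 admissions/day, 3.1-day mean stay.
load = 4.2 * 3.1
# Smallest unit size keeping the chance of a full unit below 5%.
beds_needed = next(c for c in range(1, 60) if erlang_b(load, c) < 0.05)
```

    Interrogating the model then amounts to re-running this calculation under different demand scenarios.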

  1. Predicting Individual Fuel Economy

    SciTech Connect

    Lin, Zhenhong; Greene, David L

    2011-01-01

    To make informed decisions about travel and vehicle purchase, consumers need unbiased and accurate information about the fuel economy they will actually obtain. In the past, the EPA fuel economy estimates based on its 1984 rules have been widely criticized for overestimating on-road fuel economy. In 2008, EPA adopted a new estimation rule. This study compares the usefulness of the EPA's 1984 and 2008 estimates based on their prediction bias and accuracy and attempts to improve the prediction of on-road fuel economy based on consumer and vehicle attributes. We examine the usefulness of the EPA fuel economy estimates using a large sample of self-reported on-road fuel economy data and develop an Individualized Model for more accurately predicting an individual driver's on-road fuel economy based on easily determined vehicle and driver attributes. Accuracy rather than bias appears to have limited the usefulness of the EPA 1984 estimates in predicting on-road MPG. The EPA 2008 estimates appear to be equally inaccurate and substantially more biased relative to the self-reported data. Furthermore, the 2008 estimates exhibit an underestimation bias that increases with increasing fuel economy, suggesting that the new numbers will tend to underestimate the real-world benefits of fuel economy and emissions standards. By including several simple driver and vehicle attributes, the Individualized Model reduces the unexplained variance by over 55% and the standard error by 33% based on an independent test sample. The additional explanatory variables can be easily provided by the individuals.
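
    The bias/accuracy distinction can be made concrete with a small sketch: simulate self-reported on-road MPG that deviates from the label value, then compare the raw label against a simple one-predictor least-squares recalibration. All data are simulated for illustration; the study's Individualized Model uses several driver and vehicle attributes rather than this single predictor.

```python
import random
import statistics

random.seed(1)
# Simulated fleet: label MPG, plus on-road MPG with an MPG-dependent
# bias and driver-to-driver scatter (all numbers invented).
label = [random.uniform(18, 45) for _ in range(500)]
onroad = [m * 1.05 + 0.08 * (m - 18) + random.gauss(0, 2.0) for m in label]

def bias_and_rmse(pred, true):
    # Mean error (bias) and root-mean-square error (accuracy).
    errors = [p - t for p, t in zip(pred, true)]
    return statistics.mean(errors), statistics.mean(e * e for e in errors) ** 0.5

# One-predictor least-squares recalibration: onroad ~ b0 + b1 * label.
mx, my = statistics.mean(label), statistics.mean(onroad)
b1 = (sum((x - mx) * (y - my) for x, y in zip(label, onroad))
      / sum((x - mx) ** 2 for x in label))
b0 = my - b1 * mx
adjusted = [b0 + b1 * x for x in label]

raw_bias, raw_rmse = bias_and_rmse(label, onroad)
adj_bias, adj_rmse = bias_and_rmse(adjusted, onroad)
```

    Even this single-attribute recalibration removes most of the systematic bias; the study's point is that adding driver and vehicle attributes shrinks the remaining scatter as well.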

  2. Fluctuation theory of critical phenomena in fluids

    NASA Astrophysics Data System (ADS)

    Martynov, G. A.

    2016-07-01

    It is assumed that critical phenomena are generated by density wave fluctuations carrying a certain kinetic energy. It is noted that all coupling equations for critical indices are obtained within the context of this hypothesis. Critical indices are evaluated for 15 liquids more accurately than when using the current theory of critical phenomena.

  3. Painful Issues in Pain Prediction.

    PubMed

    Hu, Li; Iannetti, Gian Domenico

    2016-04-01

    How perception of pain emerges from neural activity is largely unknown. Identifying a neural 'pain signature' and deriving a way to predict perceived pain from brain activity would have enormous basic and clinical implications. Researchers are increasingly turning to functional brain imaging, often applying machine-learning algorithms to infer that pain perception occurred. Yet, such sophisticated analyses are fraught with interpretive difficulties. Here, we highlight some common and troublesome problems in the literature, and suggest methods to ensure researchers draw accurate conclusions from their results. Since functional brain imaging is increasingly finding practical applications with real-world consequences, it is critical to interpret brain scans accurately, because decisions based on neural data will only be as good as the science behind them. PMID:26898163

  4. Efficacy of the APACHE II score at ICU discharge in predicting post-ICU mortality and ICU readmission in critically ill surgical patients.

    PubMed

    Lee, H; Lim, C W; Hong, H P; Ju, J W; Jeon, Y T; Hwang, J W; Park, H P

    2015-03-01

    In this study, we evaluated the efficacy of the discharge Acute Physiology and Chronic Health Evaluation (APACHE) II score in predicting post-intensive care unit (ICU) mortality and ICU readmission during the same hospitalisation in a surgical ICU. Of 1190 patients who were admitted to the ICU and stayed >48 hours between October 2007 and March 2010, 23 (1.9%) died and 86 (7.2%) were readmitted after initial ICU discharge, with 26 (3.0%) admitted within 48 hours. The area under the receiver operating characteristics curve of the discharge and admission APACHE II scores in predicting in-hospital mortality was 0.631 (95% confidence interval [CI] 0.603 to 0.658) and 0.669 (95% CI 0.642 to 0.696), respectively (P=0.510). The area under the receiver operating characteristics curve of discharge and admission APACHE II scores for predicting all forms of readmission was 0.606 (95% CI 0.578 to 0.634) and 0.574 (95% CI 0.545 to 0.602), respectively (P=0.316). The area under the receiver operating characteristics curve of discharge APACHE II score in predicting early ICU readmissions was, however, higher than that of admission APACHE II score (0.688 [95% CI 0.660 to 0.714] versus 0.505 [95% CI 0.476 to 0.534], P=0.001). The discharge APACHE II score (odds ratio [OR] 1.1, 95% CI 1.01 to 1.22, P=0.024), unplanned ICU readmission (OR 20.0, 95% CI 7.6 to 53.1, P=0.001), eosinopenia at ICU discharge (OR 6.0, 95% CI 1.34 to 26.9, P=0.019), and hospital length-of-stay before ICU admission (OR 1.02, 95% CI 1.01 to 1.03, P=0.021) were significant independent factors in predicting post-ICU mortality. This study suggests that the discharge APACHE II score may be useful in predicting post-ICU mortality and is superior to the admission APACHE II score in predicting early ICU readmission in surgical ICU patients.
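
    The area under the ROC curve used throughout this study has a simple rank interpretation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (the Mann-Whitney statistic). A toy sketch with invented scores, illustrating how one score can separate an outcome well while another sits near chance:

```python
def auc(scores, labels):
    # Mann-Whitney form of the ROC area: the fraction of positive/negative
    # pairs where the positive case has the higher score (ties count 1/2).
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented example: 1 = early ICU readmission, 0 = no readmission.
readmitted = [1, 1, 1, 0, 0, 0, 0, 0]
discharge_apache = [22, 19, 18, 14, 12, 11, 9, 8]   # separates well
admission_apache = [15, 9, 12, 14, 10, 16, 8, 11]   # near-chance
```

    An AUC of 0.5 is chance-level discrimination, which is why the admission score's 0.505 for early readmission in the abstract amounts to no predictive value.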

  5. Thinking Critically about Critical Thinking

    ERIC Educational Resources Information Center

    Mulnix, Jennifer Wilson

    2012-01-01

    As a philosophy professor, one of my central goals is to teach students to think critically. However, one difficulty with determining whether critical thinking can be taught, or even measured, is that there is widespread disagreement over what critical thinking actually is. Here, I reflect on several conceptions of critical thinking, subjecting…

  6. Critical assumptions: thinking critically about critical thinking.

    PubMed

    Riddell, Thelma

    2007-03-01

    The concept of critical thinking has been featured in nursing literature for the past 20 years. It has been described but not defined by both the American Association of Colleges of Nursing and the National League for Nursing, although their corresponding accreditation bodies require that critical thinking be included in nursing curricula. In addition, there is no reliable or valid measurement tool for critical thinking ability in nursing. As a result, there is a lack of research support for the assumptions that critical thinking can be learned and that critical thinking ability improves clinical competence. Brookfield suggested that commitments should be made only after a period of critically reflective analysis, during which the congruence between perceptions and reality are examined. In an evidence-based practice profession, we, as nurse educators, need to ask ourselves how we can defend our assumptions that critical thinking can be learned and that critical thinking improves the quality of nursing practice, especially when there is virtually no consensus on a definition.

  7. Accurately measuring dynamic coefficient of friction in ultraform finishing

    NASA Astrophysics Data System (ADS)

    Briggs, Dennis; Echaves, Samantha; Pidgeon, Brendan; Travis, Nathan; Ellis, Jonathan D.

    2013-09-01

    UltraForm Finishing (UFF) is a deterministic sub-aperture computer numerically controlled grinding and polishing platform designed by OptiPro Systems. UFF is used to grind and polish a variety of optics from simple spherical to fully freeform, and numerous materials from glasses to optical ceramics. The UFF system consists of an abrasive belt around a compliant wheel that rotates and contacts the part to remove material. This work aims to accurately measure the dynamic coefficient of friction (μ), how it changes as a function of belt wear, and how this ultimately affects material removal rates. The coefficient of friction has been examined in terms of contact mechanics and Preston's equation to determine accurate material removal rates. By accurately predicting changes in μ, polishing iterations can be more accurately predicted, reducing the total number of iterations required to meet specifications. We have established an experimental apparatus that can accurately measure μ by measuring triaxial forces during translating loading conditions or while manufacturing the removal spots used to calculate material removal rates. Using this system, we will demonstrate μ measurements for UFF belts during different states of their lifecycle and assess the material removal function from spot diagrams as a function of wear. Ultimately, we will use this system for qualifying belt-wheel-material combinations to develop a spot-morphing model to better predict instantaneous material removal functions.
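
    Preston's equation, which the abstract invokes, states that removed depth scales with the product of contact pressure, relative surface speed, and dwell time; one common way to fold the measured μ into the model is to treat the Preston coefficient as proportional to it. The numbers below are invented placeholders, not UFF process data, and the friction-proportional coefficient is a modeling assumption:

```python
def preston_depth(k_p, pressure, velocity, dwell):
    # Preston's equation: removed depth = k_p * P * v * t.
    return k_p * pressure * velocity * dwell

c = 2.0e-13           # process constant, m^2/N (invented)
P = 5.0e4             # contact pressure, Pa (invented)
v = 3.0               # belt surface speed, m/s (invented)
t = 10.0              # dwell time, s

# Assumed relation k_p = c * mu: belt wear lowers mu and, with it,
# the effective removal rate.
depth_new_belt = preston_depth(c * 0.45, P, v, t)   # fresh belt, mu = 0.45
depth_worn_belt = preston_depth(c * 0.30, P, v, t)  # worn belt, mu = 0.30
```

    Tracking μ over the belt lifecycle then translates directly into an updated removal function for each polishing iteration.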

  8. Accurate Weather Forecasting for Radio Astronomy

    NASA Astrophysics Data System (ADS)

    Maddalena, Ronald J.

    2010-01-01

    The NRAO Green Bank Telescope routinely observes at wavelengths from 3 mm to 1 m. As with all mm-wave telescopes, observing conditions depend upon the variable atmospheric water content. The site provides over 100 days/yr when opacities are low enough for good observing at 3 mm, but winds on the open-air structure reduce the time suitable for 3-mm observing where pointing is critical. Thus, to maximize productivity, the observing wavelength needs to match weather conditions. For 6 years the telescope has used a dynamic scheduling system (recently upgraded; www.gb.nrao.edu/DSS) that requires accurate multi-day forecasts for winds and opacities. Since opacity forecasts are not provided by the National Weather Service (NWS), I have developed an automated system that takes available forecasts, derives forecasted opacities, and deploys the results on the web in user-friendly graphical overviews (www.gb.nrao.edu/rmaddale/Weather). The system relies on the "North American Mesoscale" models, which are updated by the NWS every 6 hrs, have a 12 km horizontal resolution, 1 hr temporal resolution, run to 84 hrs, and have 60 vertical layers that extend to 20 km. Each forecast consists of a time series of ground conditions, cloud coverage, etc., and, most importantly, temperature, pressure, and humidity as a function of height. I use Liebe's MPM model (Radio Science, 20, 1069, 1985) to determine the absorption in each layer for each hour for 30 observing wavelengths. Radiative transfer provides, for each hour and wavelength, the total opacity and the radio brightness of the atmosphere, which contributes substantially at some wavelengths to Tsys and the observational noise. Comparisons of measured and forecasted Tsys at 22.2 and 44 GHz imply that the forecasted opacities are good to about 0.01 nepers, which is sufficient for forecasting and accurate calibration. Reliability is high out to 2 days and degrades slowly for longer-range forecasts.
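
    The per-layer step the forecast system performs can be sketched as a plane-parallel radiative transfer sum: total zenith opacity is the sum of the layer opacities, and the sky brightness adds each layer's emission attenuated by the air below it. The layer values here are invented; real forecasts integrate over 60 layers, many frequencies, and hourly time steps.

```python
import math

def zenith_opacity_and_sky_temp(layer_tau, layer_temp):
    # Plane-parallel radiative transfer, ground layer first: each layer
    # emits T * (1 - exp(-tau)), attenuated by the opacity beneath it.
    total_tau = sum(layer_tau)
    t_sky = 0.0
    tau_below = 0.0
    for tau, temp in zip(layer_tau, layer_temp):
        t_sky += temp * (1.0 - math.exp(-tau)) * math.exp(-tau_below)
        tau_below += tau
    return total_tau, t_sky

# Three invented layers (opacity in nepers, physical temperature in K).
tau, t_sky = zenith_opacity_and_sky_temp([0.01, 0.02, 0.01],
                                         [280.0, 260.0, 230.0])
```

    The atmospheric contribution to Tsys then follows from `t_sky` plus receiver and spillover terms, which is how forecasted opacities can be checked against measured system temperatures.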

  9. Coulomb explosion in dicationic noble gas clusters: a genetic algorithm-based approach to critical size estimation for the suppression of Coulomb explosion and prediction of dissociation channels.

    PubMed

    Nandy, Subhajit; Chaudhury, Pinaki; Bhattacharyya, S P

    2010-06-21

    We present a genetic algorithm based investigation of structural fragmentation in dicationic noble gas clusters, Ar(n)(+2), Kr(n)(+2), and Xe(n)(+2), where n denotes the size of the cluster. Dications are predicted to be stable above a threshold size of the cluster when positive charges are assumed to remain localized on two noble gas atoms and the Lennard-Jones potential along with bare Coulomb and ion-induced dipole interactions are taken into account for describing the potential energy surface. Our cutoff values are close to those obtained experimentally [P. Scheier and T. D. Mark, J. Chem. Phys. 11, 3056 (1987)] and theoretically [J. G. Gay and B. J. Berne, Phys. Rev. Lett. 49, 194 (1982)]. When the charges are allowed to be equally distributed over four noble gas atoms in the cluster and the nonpolarization interaction terms are allowed to remain unchanged, our method successfully identifies the size threshold for stability as well as the nature of the channels of dissociation as function of cluster size. In Ar(n)(2+), for example, fissionlike fragmentation is predicted for n=55 while for n=43, the predicted outcome is nonfission fragmentation in complete agreement with earlier work [Golberg et al., J. Chem. Phys. 100, 8277 (1994)]. PMID:20572686
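
    The genetic-algorithm machinery can be illustrated on a drastically reduced version of the problem: choose which two sites of a fixed toy geometry carry the charges so that the Coulomb repulsion is minimized, evolving a population by selection and mutation. This omits the Lennard-Jones and ion-induced dipole terms and the real cluster geometry entirely; it is only a sketch of the search strategy.

```python
import random

random.seed(7)
N = 12  # toy geometry: atoms at integer positions 0..N-1 on a line

def energy(pair):
    # Bare Coulomb repulsion between the two charged sites (units dropped).
    i, j = pair
    return 1.0 / abs(i - j)

def mutate(pair):
    # Move one randomly chosen charge to a random site.
    i, j = pair
    k = random.randrange(N)
    cand = (k, j) if random.random() < 0.5 else (i, k)
    return cand if cand[0] != cand[1] else pair

population = [tuple(random.sample(range(N), 2)) for _ in range(20)]
for _ in range(60):
    population.sort(key=energy)               # rank by fitness
    elite = population[:10]                   # keep the best half
    population = elite + [mutate(random.choice(elite)) for _ in range(10)]
best = min(population, key=energy)
```

    The elitist selection makes the best-so-far energy non-increasing, so the charges drift toward maximal separation; crossover, which the full method can also use, is omitted here for brevity.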

  10. Coulomb explosion in dicationic noble gas clusters: A genetic algorithm-based approach to critical size estimation for the suppression of Coulomb explosion and prediction of dissociation channels

    NASA Astrophysics Data System (ADS)

    Nandy, Subhajit; Chaudhury, Pinaki; Bhattacharyya, S. P.

    2010-06-01

    We present a genetic algorithm based investigation of structural fragmentation in dicationic noble gas clusters, Arn+2, Krn+2, and Xen+2, where n denotes the size of the cluster. Dications are predicted to be stable above a threshold size of the cluster when positive charges are assumed to remain localized on two noble gas atoms and the Lennard-Jones potential along with bare Coulomb and ion-induced dipole interactions are taken into account for describing the potential energy surface. Our cutoff values are close to those obtained experimentally [P. Scheier and T. D. Mark, J. Chem. Phys. 11, 3056 (1987)] and theoretically [J. G. Gay and B. J. Berne, Phys. Rev. Lett. 49, 194 (1982)]. When the charges are allowed to be equally distributed over four noble gas atoms in the cluster and the nonpolarization interaction terms are allowed to remain unchanged, our method successfully identifies the size threshold for stability as well as the nature of the channels of dissociation as function of cluster size. In Arn2+, for example, fissionlike fragmentation is predicted for n =55 while for n =43, the predicted outcome is nonfission fragmentation in complete agreement with earlier work [Golberg et al., J. Chem. Phys. 100, 8277 (1994)].

  11. Critical Care

    MedlinePlus

    Critical care helps people with life-threatening injuries and illnesses. It might treat problems such as complications from surgery, ... attention by a team of specially-trained health care providers. Critical care usually takes place in an ...

  12. Accurately Detecting Students' Lies regarding Relational Aggression by Correctional Instructions

    ERIC Educational Resources Information Center

    Dickhauser, Oliver; Reinhard, Marc-Andre; Marksteiner, Tamara

    2012-01-01

    This study investigates the effect of correctional instructions when detecting lies about relational aggression. Based on models from the field of social psychology, we predict that correctional instruction will lead to a less pronounced lie bias and to more accurate lie detection. Seventy-five teachers received videotapes of students' true denial…

  13. Archetypal Criticism.

    ERIC Educational Resources Information Center

    Chesebro, James W.; And Others

    1990-01-01

    Argues that archetypal criticism is a useful way of examining universal, historical, and cross-cultural symbols in classrooms. Identifies essential features of an archetype; outlines operational and critical procedures; illustrates archetypal criticism as applied to the cross as a symbol; and provides a synoptic placement for archetypal criticism…

  14. Accurate Cross Sections for Microanalysis

    PubMed Central

    Rez, Peter

    2002-01-01

    To calculate the intensity of x-ray emission in electron beam microanalysis requires a knowledge of the energy distribution of the electrons in the solid, the energy variation of the ionization cross section of the relevant subshell, the fraction of ionization events producing x rays of interest and the absorption coefficient of the x rays on the path to the detector. The theoretical predictions and experimental data available for ionization cross sections are limited mainly to K shells of a few elements. Results of systematic plane wave Born approximation calculations with exchange for K, L, and M shell ionization cross sections over the range of electron energies used in microanalysis are presented. Comparisons are made with experimental measurement for selected K shells and it is shown that the plane wave theory is not appropriate for overvoltages less than 2.5. PMID:27446747

  15. Predicting Gang Fight Participation in a General Youth Sample via the HEW Youth Development Model's Community Program Impact Scales, Age, and Sex.

    ERIC Educational Resources Information Center

    Truckenmiller, James L.

    The accurate prediction of violence has been in the spotlight of critical concern in recent years. To investigate the relative predictive power of peer pressure, youth perceived negative labeling, youth perceived access to educational and occupational roles, social alienation, self-esteem, sex, and age with regard to gang fight participation…

  16. Critical Thinking Disposition and Skills in Dental Students: Development and Relationship to Academic Outcomes.

    PubMed

    Whitney, Eli M; Aleksejuniene, Jolanta; Walton, Joanne N

    2016-08-01

    Critical thinking is a key element of complex problem-solving and professional behavior. An ideal critical thinking measurement instrument would be able to accurately predict which dental students are predisposed to and capable of thinking critically and applying such thinking skills to clinical situations. The aims of this study were to describe critical thinking disposition and skills in dental students at the beginning and end of their first year, examine cohort and gender effects, and compare their critical thinking test scores to their first-year grades. Volunteers from three student cohorts at the University of British Columbia were tested using the California Critical Thinking Disposition Inventory and California Critical Thinking Skills instruments at the beginning and end of their first year. Based on the preliminary findings, one cohort was retested at graduation when their final-year grades and clinical advisor rankings were compared to their critical thinking test scores. The results showed that students who entered dental school with higher critical thinking scores tended to complete their first year with higher critical thinking scores, achieve higher grades, and show greater disposition to think critically at the start of the program. Students who demonstrated an ability to think critically and had a disposition to do so at the start of the program were also likely to demonstrate those same attributes at the completion of their training. High critical thinking scores were associated with success in both didactic and clinical settings in dental school. PMID:27480706

  18. Are External Knee Load and EMG Measures Accurate Indicators of Internal Knee Contact Forces during Gait?

    PubMed Central

    Meyer, Andrew J.; D'Lima, Darryl D.; Besier, Thor F.; Lloyd, David G.; Colwell, Clifford W.; Fregly, Benjamin J.

    2013-01-01

    Mechanical loading is believed to be a critical factor in the development and treatment of knee osteoarthritis. However, the contact forces to which the knee articular surfaces are subjected during daily activities cannot be measured clinically. Thus, the ability to predict internal knee contact forces accurately using external measures (i.e., external knee loads and muscle EMG signals) would be clinically valuable. This study quantifies how well external knee load and EMG measures predict internal knee contact forces during gait. A single subject with a force-measuring tibial prosthesis and post-operative valgus alignment performed four gait patterns (normal, medial thrust, walking pole, and trunk sway) to induce a wide range of external and internal knee joint loads. Linear regression analyses were performed to assess how much of the variability in internal contact forces was accounted for by variability in the external measures. Though the different gait patterns successfully induced significant changes in the external and internal quantities, changes in external measures were generally weak indicators of changes in total, medial, and lateral contact force. Our results suggest that when total contact force may be changing, caution should be exercised when inferring changes in knee contact forces based on observed changes in external knee load and EMG measures. Advances in musculoskeletal modeling methods may be needed for accurate estimation of in vivo knee contact forces. PMID:23280647
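
    The regression analyses in the abstract reduce to asking what fraction of the variance in contact force is explained by an external measure, i.e. the coefficient of determination. A self-contained sketch with invented gait numbers (not the study's instrumented-prosthesis data):

```python
def r_squared(x, y):
    # Coefficient of determination for a one-predictor least-squares fit:
    # the fraction of variance in y accounted for by x.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b1 = sxy / sxx
    b0 = my - b1 * mx
    ss_res = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Hypothetical per-step values: external knee adduction moment (% BW*Ht)
# vs. measured medial contact force (body weights), numbers invented.
moment = [2.1, 2.8, 3.0, 3.6, 4.2, 4.4, 5.0, 5.3]
force = [1.1, 1.3, 1.6, 1.5, 1.9, 2.4, 2.2, 2.6]
r2 = r_squared(moment, force)
```

    The study's finding is that across gait patterns the analogous R² values for external measures were generally low, which is what makes inferring contact-force changes from them hazardous.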

  19. Predicting violence in romantic relationships during adolescence and emerging adulthood: a critical review of the mechanisms by which familial and peer influences operate.

    PubMed

    Olsen, James P; Parra, Gilbert R; Bennett, Shira A

    2010-06-01

    For three decades, researchers have sought to gain a greater understanding of the developmental antecedents to later perpetration or victimization of violence in romantic relationships. Whereas the majority of early studies focused on family-of-origin factors, attention in recent years has turned to additional ecologies such as peer relationships. This review highlights accomplishments of both family and peer studies that focus on violent romantic relationships in an effort to summarize the current state of knowledge. Attention is given to epidemiology and developmental family and peer factors, with special attention given to mechanisms that mediate and/or moderate the relation between family and peer factors and later participation in violent relationships. A critical approach is taken throughout the review in order to identify limitations of previous studies, and to highlight key findings. A case is made for viewing these developmental antecedents as a result of multiple developmental ecologies that is perhaps best summarized as a culture of violence.

  20. Linear combination of one-step predictive information with an external reward in an episodic policy gradient setting: a critical analysis.

    PubMed

    Zahedi, Keyan; Martius, Georg; Ay, Nihat

    2013-01-01

    One of the main challenges in the field of embodied artificial intelligence is the open-ended autonomous learning of complex behaviors. Our approach is to use task-independent, information-driven intrinsic motivation(s) to support task-dependent learning. The work presented here is a preliminary step in which we investigate the predictive information (the mutual information of the past and future of the sensor stream) as an intrinsic drive, ideally supporting any kind of task acquisition. Previous experiments have shown that the predictive information (PI) is a good candidate to support autonomous, open-ended learning of complex behaviors, because a maximization of the PI corresponds to an exploration of morphology- and environment-dependent behavioral regularities. The idea is that these regularities can then be exploited in order to solve any given task. Three different experiments are presented and their results lead to the conclusion that the linear combination of the one-step PI with an external reward function is not generally recommended in an episodic policy gradient setting. Only for hard tasks can a great speed-up be achieved, at the cost of an asymptotic performance loss.
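
    The one-step predictive information at the center of this analysis is the mutual information between consecutive sensor values. A minimal plug-in estimator for a discrete stream (toy data, not the paper's robot experiments):

```python
import math
from collections import Counter

def one_step_pi(stream):
    # One-step predictive information: the mutual information between
    # consecutive sensor values, I(s_t ; s_{t+1}), from empirical counts.
    pairs = list(zip(stream, stream[1:]))
    n = len(pairs)
    joint = Counter(pairs)
    past = Counter(p for p, _ in pairs)
    future = Counter(f for _, f in pairs)
    mi = 0.0
    for (p, f), c in joint.items():
        pj = c / n
        mi += pj * math.log2(pj / ((past[p] / n) * (future[f] / n)))
    return mi

# A perfectly predictable alternating stream carries ~1 bit of one-step
# PI; a constant stream carries none.
alternating = [0, 1] * 50
constant = [1] * 100
```

    In the paper's setting this quantity is estimated over the sensor stream and added, linearly weighted, to the external reward; the toy streams above show the two extremes of the measure.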

  1. The importance of accurate atmospheric modeling

    NASA Astrophysics Data System (ADS)

    Payne, Dylan; Schroeder, John; Liang, Pang

    2014-11-01

    This paper will focus on the effect of atmospheric conditions on EO sensor performance using computer models. We have shown the importance of accurately modeling atmospheric effects for predicting the performance of an EO sensor. A simple example will demonstrate how real conditions for several sites in China significantly impact image correction, hyperspectral imaging, and remote sensing. The current state-of-the-art model for computing atmospheric transmission and radiance is MODTRAN® 5, developed by the US Air Force Research Laboratory and Spectral Science, Inc. Research by the US Air Force, Navy and Army resulted in the public release of LOWTRAN 2 in the early 1970's. Subsequent releases of LOWTRAN and MODTRAN® have continued until the present. The paper will demonstrate the importance of using validated models and locally measured meteorological, atmospheric and aerosol conditions to accurately simulate the atmospheric transmission and radiance. Frequently default conditions are used, which can produce errors of as much as 75% in these values. This can have a significant impact on remote sensing applications.

  2. Appalachian Thinner: Sensitivity of cost predictions to site factors

    SciTech Connect

    Wilson, G.E.; White, D.E.

    1983-12-01

    The importance of cost analysis for logging equipment warrants evaluating the influence that prediction variables have on potential profitability. Logging equipment cost analysis is subject to high variance, but a higher degree of accuracy is obtained if profitability predictions consider site factors. This paper analyzes the influence of several site factors on logging cost predictions formulated for a small prototype cable yarder called the Appalachian Thinner. Identification of critical parameters makes possible an accurate evaluation of production rates and establishes possible machine inefficiencies that may be modified so as to increase overall profitability.

  3. Critical Thinking vs. Critical Consciousness

    ERIC Educational Resources Information Center

    Doughty, Howard A.

    2006-01-01

    This article explores four kinds of critical thinking. The first is found in Socratic dialogues, which employ critical thinking mainly to reveal logical fallacies in common opinions, thus cleansing superior minds of error and leaving philosophers free to contemplate universal verities. The second is critical interpretation (hermeneutics) which…

  4. Critically Thinking about Critical Thinking

    ERIC Educational Resources Information Center

    Weissberg, Robert

    2013-01-01

    In this article, the author states that "critical thinking" has mesmerized academics across the political spectrum and that even high school students are now being called upon to "think critically." He further adds that it is no exaggeration to say that "critical thinking" has quickly evolved into a scholarly…

  5. How Critical Is Critical Thinking?

    ERIC Educational Resources Information Center

    Shaw, Ryan D.

    2014-01-01

    Recent educational discourse is full of references to the value of critical thinking as a 21st-century skill. In music education, critical thinking has been discussed in relation to problem solving and music listening, and some researchers suggest that training in critical thinking can improve students' responses to music. But what exactly is…

  6. Rapid and accurate determination of tissue optical properties using least-squares support vector machines

    PubMed Central

    Barman, Ishan; Dingari, Narahara Chari; Rajaram, Narasimhan; Tunnell, James W.; Dasari, Ramachandra R.; Feld, Michael S.

    2011-01-01

    Diffuse reflectance spectroscopy (DRS) has been extensively applied for the characterization of biological tissue, especially for dysplasia and cancer detection, by determination of the tissue optical properties. A major challenge in performing routine clinical diagnosis lies in the extraction of the relevant parameters, especially at high absorption levels typically observed in cancerous tissue. Here, we present a new least-squares support vector machine (LS-SVM) based regression algorithm for rapid and accurate determination of the absorption and scattering properties. Using physical tissue models, we demonstrate that the proposed method can be implemented more than two orders of magnitude faster than the state-of-the-art approaches while providing better prediction accuracy. Our results show that the proposed regression method has great potential for clinical applications including in tissue scanners for cancer margin assessment, where rapid quantification of optical properties is critical to the performance. PMID:21412464
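    The key computational property of an LS-SVM is that training reduces to a single linear solve of [[0, 1ᵀ], [1, K + I/γ]][b; α] = [0; y] rather than a quadratic program. Below is a generic sketch of that formulation, not the authors' tissue-optics model; the RBF kernel and the hyperparameters are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian RBF kernel matrix between row-wise sample sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

class LSSVMRegressor:
    """Least-squares SVM regression: one dense linear solve at training time."""
    def __init__(self, gamma=100.0, sigma=1.0):
        self.gamma, self.sigma = gamma, sigma
    def fit(self, X, y):
        n = len(y)
        K = rbf_kernel(X, X, self.sigma)
        # Assemble the saddle-point system [[0, 1^T], [1, K + I/gamma]].
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = 1.0
        A[1:, 0] = 1.0
        A[1:, 1:] = K + np.eye(n) / self.gamma
        sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
        self.b, self.alpha, self.X = sol[0], sol[1:], X
        return self
    def predict(self, X):
        # f(x) = sum_i alpha_i k(x, x_i) + b
        return rbf_kernel(X, self.X, self.sigma) @ self.alpha + self.b
```

    Because fitting is one dense solve and prediction is one kernel evaluation, this class of regressor can be dramatically faster than iterative inversion schemes, which is the speed advantage the abstract reports.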

  7. Rapid and accurate determination of tissue optical properties using least-squares support vector machines.

    PubMed

    Barman, Ishan; Dingari, Narahara Chari; Rajaram, Narasimhan; Tunnell, James W; Dasari, Ramachandra R; Feld, Michael S

    2011-01-01

    Diffuse reflectance spectroscopy (DRS) has been extensively applied for the characterization of biological tissue, especially for dysplasia and cancer detection, by determination of the tissue optical properties. A major challenge in performing routine clinical diagnosis lies in the extraction of the relevant parameters, especially at high absorption levels typically observed in cancerous tissue. Here, we present a new least-squares support vector machine (LS-SVM) based regression algorithm for rapid and accurate determination of the absorption and scattering properties. Using physical tissue models, we demonstrate that the proposed method can be implemented more than two orders of magnitude faster than the state-of-the-art approaches while providing better prediction accuracy. Our results show that the proposed regression method has great potential for clinical applications including in tissue scanners for cancer margin assessment, where rapid quantification of optical properties is critical to the performance. PMID:21412464

  8. Anomalous critical fields in quantum critical superconductors.

    PubMed

    Putzke, C; Walmsley, P; Fletcher, J D; Malone, L; Vignolles, D; Proust, C; Badoux, S; See, P; Beere, H E; Ritchie, D A; Kasahara, S; Mizukami, Y; Shibauchi, T; Matsuda, Y; Carrington, A

    2014-01-01

    Fluctuations around an antiferromagnetic quantum critical point (QCP) are believed to lead to unconventional superconductivity and in some cases to high-temperature superconductivity. However, the exact mechanism by which this occurs remains poorly understood. The iron-pnictide superconductor BaFe2(As(1-x)P(x))2 is perhaps the clearest example to date of a high-temperature quantum critical superconductor, and so it is a particularly suitable system to study how the quantum critical fluctuations affect the superconducting state. Here we show that the proximity of the QCP yields unexpected anomalies in the superconducting critical fields. We find that both the lower and upper critical fields do not follow the behaviour, predicted by conventional theory, resulting from the observed mass enhancement near the QCP. Our results imply that the energy of superconducting vortices is enhanced, possibly due to a microscopic mixing of antiferromagnetism and superconductivity, suggesting that a highly unusual vortex state is realized in quantum critical superconductors. PMID:25477044

  9. Anomalous critical fields in quantum critical superconductors

    PubMed Central

    Putzke, C.; Walmsley, P.; Fletcher, J. D.; Malone, L.; Vignolles, D.; Proust, C.; Badoux, S.; See, P.; Beere, H. E.; Ritchie, D. A.; Kasahara, S.; Mizukami, Y.; Shibauchi, T.; Matsuda, Y.; Carrington, A.

    2014-01-01

    Fluctuations around an antiferromagnetic quantum critical point (QCP) are believed to lead to unconventional superconductivity and in some cases to high-temperature superconductivity. However, the exact mechanism by which this occurs remains poorly understood. The iron-pnictide superconductor BaFe2(As1−xPx)2 is perhaps the clearest example to date of a high-temperature quantum critical superconductor, and so it is a particularly suitable system to study how the quantum critical fluctuations affect the superconducting state. Here we show that the proximity of the QCP yields unexpected anomalies in the superconducting critical fields. We find that both the lower and upper critical fields do not follow the behaviour, predicted by conventional theory, resulting from the observed mass enhancement near the QCP. Our results imply that the energy of superconducting vortices is enhanced, possibly due to a microscopic mixing of antiferromagnetism and superconductivity, suggesting that a highly unusual vortex state is realized in quantum critical superconductors. PMID:25477044

  10. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  11. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  12. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  13. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  14. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  15. Strategy Guideline. Accurate Heating and Cooling Load Calculations

    SciTech Connect

    Burdick, Arlan

    2011-06-01

    This guide presents the key criteria required to create accurate heating and cooling load calculations and offers examples of the implications when inaccurate adjustments are applied to the HVAC design process. The guide shows, through realistic examples, how various defaults and arbitrary safety factors can lead to significant increases in the load estimate. Emphasis is placed on the risks incurred from inaccurate adjustments or ignoring critical inputs of the load calculation.

  16. Strategy Guideline: Accurate Heating and Cooling Load Calculations

    SciTech Connect

    Burdick, A.

    2011-06-01

    This guide presents the key criteria required to create accurate heating and cooling load calculations and offers examples of the implications when inaccurate adjustments are applied to the HVAC design process. The guide shows, through realistic examples, how various defaults and arbitrary safety factors can lead to significant increases in the load estimate. Emphasis is placed on the risks incurred from inaccurate adjustments or ignoring critical inputs of the load calculation.

  17. Criticality Model

    SciTech Connect

    A. Alsaed

    2004-09-14

    The ''Disposal Criticality Analysis Methodology Topical Report'' (YMP 2003) presents the methodology for evaluating potential criticality situations in the monitored geologic repository. As stated in the referenced Topical Report, the detailed methodology for performing the disposal criticality analyses will be documented in model reports. Many of the models developed in support of the Topical Report differ from the definition of models as given in the Office of Civilian Radioactive Waste Management procedure AP-SIII.10Q, ''Models'', in that they are procedural, rather than mathematical. These model reports document the detailed methodology necessary to implement the approach presented in the Disposal Criticality Analysis Methodology Topical Report and provide calculations utilizing the methodology. Thus, the governing procedure for this type of report is AP-3.12Q, ''Design Calculations and Analyses''. The ''Criticality Model'' is of this latter type, providing a process for evaluating the criticality potential of in-package and external configurations. The purpose of this analysis is to lay out the process for calculating the criticality potential for various in-package and external configurations and to calculate lower-bound tolerance limit (LBTL) values and determine range of applicability (ROA) parameters. The LBTL calculations and the ROA determinations are performed using selected benchmark experiments that are applicable to various waste forms and various in-package and external configurations. The waste forms considered in this calculation are pressurized water reactor (PWR), boiling water reactor (BWR), Fast Flux Test Facility (FFTF), Training Research Isotope General Atomic (TRIGA), Enrico Fermi, Shippingport pressurized water reactor, Shippingport light water breeder reactor (LWBR), N-Reactor, Melt and Dilute, and Fort Saint Vrain Reactor spent nuclear fuel (SNF). The scope of this analysis is to document the criticality computational method. The criticality

  18. Accurate lineshape spectroscopy and the Boltzmann constant

    PubMed Central

    Truong, G.-W.; Anstie, J. D.; May, E. F.; Stace, T. M.; Luiten, A. N.

    2015-01-01

    Spectroscopy has an illustrious history delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, including applications from trace materials detection, to understanding the atmospheres of stars and planets, and even constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate measurement of the excited-state (6P1/2) hyperfine splitting in Cs, and reveal a breakdown in the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m., and an uncertainty of 71 p.p.m. PMID:26465085
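    The Voigt profile referenced here is the convolution of a Gaussian (Doppler broadening) with a Lorentzian (pressure broadening). A brute-force numerical sketch of that standard lineshape follows; it is not the authors' quantum-limited fitting model, and the quadrature grid parameters are arbitrary illustrative choices.

```python
import numpy as np

def voigt_numeric(x, sigma, gamma, half_width=40.0, n=20001):
    """Voigt profile V(x) = (G * L)(x): Gaussian of std sigma convolved
    with a Lorentzian of half-width gamma, by direct quadrature."""
    t = np.linspace(-half_width, half_width, n)
    dt = t[1] - t[0]
    gauss = np.exp(-t ** 2 / (2.0 * sigma ** 2)) / (sigma * np.sqrt(2.0 * np.pi))
    lorentz = gamma / (np.pi * ((x - t) ** 2 + gamma ** 2))
    # Riemann sum is adequate here: the Gaussian factor kills the tails.
    return float(np.sum(gauss * lorentz) * dt)
```

    In production spectroscopy codes this convolution is evaluated via the Faddeeva function rather than quadrature, but the numerical version makes the Gaussian-Lorentzian structure of the lineshape explicit.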

  19. Accurate adiabatic correction in the hydrogen molecule

    NASA Astrophysics Data System (ADS)

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-01

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10⁻¹² at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H2, HD, HT, D2, DT, and T2 has been determined. For the ground state of H2 the estimated precision is 3 × 10⁻⁷ cm⁻¹, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.

  20. Accurate, reliable prototype earth horizon sensor head

    NASA Technical Reports Server (NTRS)

    Schwarz, F.; Cohen, H.

    1973-01-01

    The design and performance are described of an accurate and reliable prototype earth sensor head (ARPESH). The ARPESH employs a detection logic 'locator' concept and horizon sensor mechanization which should lead to high accuracy horizon sensing that is minimally degraded by spatial or temporal variations in sensing attitude from a satellite in orbit around the earth at altitudes in the 500 km region. An accuracy of horizon location to within 0.7 km has been predicted, independent of meteorological conditions; this corresponds to an error of 0.015 deg at 500 km altitude. Laboratory evaluation of the sensor indicates that this accuracy is achieved. First, the basic operating principles of ARPESH are described; next, detailed design and construction data are presented; then the performance of the sensor is reported under laboratory conditions, in which the sensor is installed in a simulator that permits it to scan over a blackbody source against a background representing the earth-space interface for various equivalent planet temperatures.

  1. Accurate adiabatic correction in the hydrogen molecule

    SciTech Connect

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-14

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10⁻¹² at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H₂, HD, HT, D₂, DT, and T₂ has been determined. For the ground state of H₂ the estimated precision is 3 × 10⁻⁷ cm⁻¹, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.

  2. Accurate adiabatic correction in the hydrogen molecule.

    PubMed

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-14

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10⁻¹² at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H2, HD, HT, D2, DT, and T2 has been determined. For the ground state of H2 the estimated precision is 3 × 10⁻⁷ cm⁻¹, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels. PMID:25494728

  3. Optimal speed estimation in natural image movies predicts human performance.

    PubMed

    Burge, Johannes; Geisler, Wilson S

    2015-01-01

    Accurate perception of motion depends critically on accurate estimation of retinal motion speed. Here we first analyse natural image movies to determine the optimal space-time receptive fields (RFs) for encoding local motion speed in a particular direction, given the constraints of the early visual system. Next, from the RF responses to natural stimuli, we determine the neural computations that are optimal for combining and decoding the responses into estimates of speed. The computations show how selective, invariant speed-tuned units might be constructed by the nervous system. Then, in a psychophysical experiment using matched stimuli, we show that human performance is nearly optimal. Indeed, a single efficiency parameter accurately predicts the detailed shapes of a large set of human psychometric functions. We conclude that many properties of speed-selective neurons and human speed discrimination performance are predicted by the optimal computations, and that natural stimulus variation affects optimal and human observers almost identically.

  4. Critical Muralism

    ERIC Educational Resources Information Center

    Rosette, Arturo

    2009-01-01

    This study focuses on the development and practices of Critical Muralists--community-educator-artist-leader-activists--and situates these specifically in relation to the Mexican mural tradition of los Tres Grandes and in relation to the history of public art more generally. The study examines how Critical Muralists address artistic and…

  5. Clinical Usefulness of Urinary Fatty Acid Binding Proteins in Assessing the Severity and Predicting Treatment Response of Pneumonia in Critically Ill Patients

    PubMed Central

    Tsao, Tsung-Cheng; Tsai, Han-Chen; Chang, Shi-Chuan

    2016-01-01

    assessing the pneumonia severity and in predicting treatment response, respectively. Further studies with larger populations are needed to verify these issues. PMID:27175705

  6. Accurate paleointensities - the multi-method approach

    NASA Astrophysics Data System (ADS)

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade, methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements, and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results was emphasized. Recently, a non-heating, relative paleointensity technique was proposed -the pseudo-Thellier protocol- which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old; the actual field strength at the time of cooling is therefore reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  7. Approaches for the accurate definition of geological time boundaries

    NASA Astrophysics Data System (ADS)

    Schaltegger, Urs; Baresel, Björn; Ovtcharova, Maria; Goudemand, Nicolas; Bucher, Hugo

    2015-04-01

    Which strategies lead to the most precise and accurate date of a given geological boundary? Geological units are usually defined by the occurrence of characteristic taxa, and hence boundaries between these geological units correspond to dramatic faunal and/or floral turnovers; they are primarily defined using first or last occurrences of index species, or ideally by the separation interval between two consecutive, characteristic associations of fossil taxa. These boundaries need to be defined in a way that enables their worldwide recognition and correlation across different stratigraphic successions, using tools as different as bio-, magneto-, and chemo-stratigraphy, and astrochronology. Sedimentary sequences can be dated in numerical terms by applying high-precision chemical-abrasion, isotope-dilution, thermal-ionization mass spectrometry (CA-ID-TIMS) U-Pb age determination to zircon (ZrSiO4) in intercalated volcanic ashes. But, though volcanic activity is common in geological history, ashes are not necessarily close to the boundary we would like to date precisely and accurately. In addition, U-Pb zircon data sets may be very complex and difficult to interpret in terms of the age of ash deposition. To overcome these difficulties we use a multi-proxy approach, which we applied to the precise and accurate dating of the Permo-Triassic and Early-Middle Triassic boundaries in South China. a) Dense sampling of ashes across the critical time interval and a sufficiently large number of analysed zircons per ash sample can guarantee the recognition of all system complexities. Geochronological datasets from U-Pb dating of volcanic zircon may indeed combine effects of i) post-crystallization Pb loss from percolation of hydrothermal fluids (even using chemical abrasion), with ii) age dispersion from prolonged residence of earlier crystallized zircon in the magmatic system. As a result, U-Pb dates of individual zircons are both apparently younger and older than the depositional age

  8. Using In-Service and Coaching to Increase Teachers' Accurate Use of Research-Based Strategies

    ERIC Educational Resources Information Center

    Kretlow, Allison G.; Cooke, Nancy L.; Wood, Charles L.

    2012-01-01

    Increasing the accurate use of research-based practices in classrooms is a critical issue. Professional development is one of the most practical ways to provide practicing teachers with training related to research-based practices. This study examined the effects of in-service plus follow-up coaching on first grade teachers' accurate delivery of…

  9. Critics and Criticism of Education

    ERIC Educational Resources Information Center

    Ornstein, Allan C.

    1977-01-01

    Radical educational critics, such as Edgar Friedenberg, Paul Goodman, A. S. Neill, John Holt, Jonathan Kozol, Herbert Kohl, James Herndon, and Ivan Illich, have few constructive goals, no strategy for broad change, and a disdain for modernization and compromise. Additionally, these critics, says the author, fail to consider social factors related…

  10. Clarifying types of uncertainty: when are models accurate, and uncertainties small?

    PubMed

    Cox, Louis Anthony Tony

    2011-10-01

    Professor Aven has recently noted the importance of clarifying the meaning of terms such as "scientific uncertainty" for use in risk management and policy decisions, such as when to trigger application of the precautionary principle. This comment examines some fundamental conceptual challenges for efforts to define "accurate" models and "small" input uncertainties by showing that increasing uncertainty in model inputs may reduce uncertainty in model outputs; that even correct models with "small" input uncertainties need not yield accurate or useful predictions for quantities of interest in risk management (such as the duration of an epidemic); and that accurate predictive models need not be accurate causal models.

  11. Critical parameters of hard-core Yukawa fluids within the structural theory

    NASA Astrophysics Data System (ADS)

    Bahaa Khedr, M.; Osman, S. M.

    2012-10-01

    A purely statistical mechanical approach is proposed to account for the liquid-vapor critical point based on the mean density approximation (MDA) of the direct correlation function. The application to hard-core Yukawa (HCY) fluids facilitates the use of the series mean spherical approximation (SMSA). The location of the critical parameters for HCY fluid with variable intermolecular range is accurately calculated. Good agreement is observed with computer simulation results and with the inverse temperature expansion (ITE) predictions. The influence of the potential range on the critical parameters is demonstrated and the universality of the critical compressibility ratio is discussed. The behavior of the isochoric and isobaric heat capacities along the equilibrium line and the near vicinity of the critical point is discussed in details.

  12. Measurement of Critical Contact Angle in a Microgravity Space Experiment

    NASA Technical Reports Server (NTRS)

    Concus, P.; Finn, R.; Weislogel, M.

    1998-01-01

    Mathematical theory predicts that small changes in container shape or in contact angle can give rise to large shifts of liquid in a microgravity environment. This phenomenon was investigated in the Interface Configuration Experiment on board the USML-2 Space Shuttle flight. The experiment's "double proboscis" containers were designed to strike a balance between conflicting requirements of sizable volume of liquid shift (for ease of observation) and abruptness of the shift (for accurate determination of critical contact angle). The experimental results support the classical concept of macroscopic contact angle and demonstrate the role of hysteresis in impeding orientation toward equilibrium.

  13. Measurement of Critical Contact Angle in a Microgravity Space Experiment

    NASA Technical Reports Server (NTRS)

    Concus, P.; Finn, R.; Weislogel, M.

    1998-01-01

    Mathematical theory predicts that small changes in container shape or in contact angle can give rise to large shifts of liquid in a microgravity environment. This phenomenon was investigated in the Interface Configuration Experiment on board the USML-2 Space Shuttle flight. The experiment's "double proboscis" containers were designed to strike a balance between conflicting requirements of sizable volume of liquid shift (for ease of observation) and abruptness of the shift (for accurate determination of critical contact angle). The experimental results support the classical concept of macroscopic contact angle and demonstrate the role of hysteresis in impeding orientation toward equilibrium.

  14. Measurement of critical contact angle in a microgravity space experiment

    NASA Astrophysics Data System (ADS)

    Concus, P.; Finn, R.; Weislogel, M.

    Mathematical theory predicts that small changes in container shape or in contact angle can give rise to large shifts of liquid in a microgravity environment. This phenomenon was investigated in the Interface Configuration Experiment on board the NASA USML-2 Space Shuttle flight. The experiment's ``double proboscis'' containers were designed to strike a balance between conflicting requirements of sizable volume of liquid shift (for ease of observation) and abruptness of the shift (for accurate determination of critical contact angle). The experimental results support the classical concept of macroscopic contact angle and demonstrate the role of hysteresis in impeding orientation toward equilibrium.

  15. Measurement of critical contact angle in a microgravity space experiment

    SciTech Connect

    Concus, P.; Finn, R.; Weislogel, M.

    1999-06-01

    Mathematical theory predicts that small changes in container shape or in contact angle can give rise to large shifts of liquid in a microgravity environment. This phenomenon was investigated in the Interface Configuration Experiment on board the NASA USML-2 Space Shuttle flight. The experiment's double proboscis containers were designed to strike a balance between conflicting requirements of sizable volume of liquid shift (for ease of observation) and abruptness of the shift (for accurate determination of critical contact angle). The experimental results support the classical concept of macroscopic contact angle and demonstrate the role of hysteresis in impeding orientation toward equilibrium.

  16. Improving the Resilience of Major Ports and Critical Supply Chains to Extreme Coastal Flooding: a Combined Artificial Neural Network and Hydrodynamic Simulation Approach to Predicting Tidal Surge Inundation of Port Infrastructure and Impact on Operations.

    NASA Astrophysics Data System (ADS)

    French, J.

    2015-12-01

    Ports are vital to the global economy, but assessments of global exposure to flood risk have generally focused on major concentrations of population or asset values. Few studies have examined the impact of extreme inundation events on port operation and critical supply chains. Extreme water levels and recurrence intervals have conventionally been estimated via analysis of historic water level maxima, and these vary widely depending on the statistical assumptions made. This information is supplemented by near-term forecasts from operational surge-tide models, which give continuous water levels but at considerable computational cost. As part of a NE