ReactPRED: a tool to predict and analyze biochemical reactions.
Sivakumar, Tadi Venkata; Giri, Varun; Park, Jin Hwan; Kim, Tae Yong; Bhaduri, Anirban
2016-11-15
Biochemical pathway engineering is often used to synthesize or degrade target chemicals. In silico screening of the biochemical transformation space allows prediction of feasible reactions constituting these pathways. Current tools predict reactions based on pre-defined biochemical transformations or reaction rule sets. Reaction rule sets are usually curated manually, tailored to specific applications, and not exhaustive. In addition, current systems cannot regulate and refine predicted data to tune specificity and sensitivity. A robust and flexible tool that allows automated reaction rule set creation along with regulated pathway prediction and analysis is therefore needed, and ReactPRED aims to address this need. ReactPRED is an open-source, flexible and customizable tool enabling users to predict biochemical reactions and pathways. The tool allows automated reaction rule creation from a user-defined reaction set. Additionally, reaction rule degree and rule tolerance features allow refinement of predicted data. It is available as a flexible graphical user interface and a console application. ReactPRED is available at: https://sourceforge.net/projects/reactpred/ CONTACT: anirban.b@samsung.com or ty76.kim@samsung.com. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Wieske, Luuk; Witteveen, Esther; Verhamme, Camiel; Dettling-Ihnenfeldt, Daniela S; van der Schaaf, Marike; Schultz, Marcus J; van Schaik, Ivo N; Horn, Janneke
2014-01-01
An early diagnosis of Intensive Care Unit-acquired weakness (ICU-AW) using muscle strength assessment is not possible in most critically ill patients. We hypothesized that development of ICU-AW can be predicted reliably two days after ICU admission, using patient characteristics, early available clinical parameters, laboratory results and use of medication as parameters. Newly admitted ICU patients mechanically ventilated ≥2 days were included in this prospective observational cohort study. Manual muscle strength was measured according to the Medical Research Council (MRC) scale when patients were awake and attentive. ICU-AW was defined as an average MRC score <4. A prediction model was developed by selecting predictors from an a priori defined set of candidate predictors based on known risk factors. Discriminative performance of the prediction model was evaluated, validated internally and compared to the APACHE IV and SOFA scores. Of 212 included patients, 103 developed ICU-AW. Highest lactate level, treatment with any aminoglycoside in the first two days after admission, and age were selected as predictors. The area under the receiver operating characteristic curve of the prediction model was 0.71 after internal validation. The new prediction model improved discrimination compared to the APACHE IV and SOFA scores. The new early prediction model for ICU-AW, using a set of three easily available parameters, has fair discriminative performance. This model needs external validation.
IDENTIFICATION AND PREDICTION OF FISH ASSEMBLAGES IN STREAMS OF THE MID-ATLANTIC HIGHLANDS, USA
Managing aquatic resources requires meaningful assessment endpoints on which to base decisions. In freshwater streams, assessment endpoints are often defined as fish communities. Given limited resources available for environmental monitoring, having a means of predicting fish a...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Behling, H.; Behling, K.; Amarasooriya, H.
1995-02-01
A generic difficulty encountered in cost-benefit analyses is the quantification of major elements that define the costs and the benefits in commensurate units. In this study, the costs of making KI available for public use, and the avoidance of thyroidal health effects predicted to be realized from the availability of that KI (i.e., the benefits), are defined in the commensurate units of dollars.
Availability Analysis of Dual Mode Systems
DOT National Transportation Integrated Search
1974-04-01
The analytical procedures presented define a method of evaluating the effects of failures in a complex dual-mode system based on a worst case steady-state analysis. The computed result is an availability figure of merit and not an absolute prediction...
Defining and predicting structurally conserved regions in protein superfamilies
Huang, Ivan K.; Grishin, Nick V.
2013-01-01
Motivation: The structures of homologous proteins are generally better conserved than their sequences. This phenomenon is demonstrated by the prevalence of structurally conserved regions (SCRs) even in highly divergent protein families. Defining SCRs requires the comparison of two or more homologous structures and is affected by their availability and divergence, and our ability to deduce structurally equivalent positions among them. In the absence of multiple homologous structures, it is necessary to predict SCRs of a protein using information from only a set of homologous sequences and (if available) a single structure. Accurate SCR predictions can benefit homology modelling and sequence alignment. Results: Using pairwise DaliLite alignments among a set of homologous structures, we devised a simple measure of structural conservation, termed structural conservation index (SCI). SCI was used to distinguish SCRs from non-SCRs. A database of SCRs was compiled from 386 SCOP superfamilies containing 6489 protein domains. Artificial neural networks were then trained to predict SCRs with various features deduced from a single structure and homologous sequences. Assessment of the predictions via a 5-fold cross-validation method revealed that predictions based on features derived from a single structure perform similarly to ones based on homologous sequences, while combining sequence and structural features was optimal in terms of accuracy (0.755) and Matthews correlation coefficient (0.476). These results suggest that even without information from multiple structures, it is still possible to effectively predict SCRs for a protein. Finally, inspection of the structures with the worst predictions pinpoints difficulties in SCR definitions. Availability: The SCR database and the prediction server can be found at http://prodata.swmed.edu/SCR. Contact: 91huangi@gmail.com or grishin@chop.swmed.edu. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23193223
Toward a Predictive Model of Community College Student Success in Blended Classes
ERIC Educational Resources Information Center
Volchok, Edward
2018-01-01
This retrospective study evaluates early semester predictors of whether or not community college students will successfully complete blended or hybrid courses. These predictors are available to faculty by the fourth week of the semester. Success is defined as receiving a grade of C- or higher. Failure is defined as a grade below a C- or a…
Dual nozzle aerodynamic and cooling analysis study
NASA Technical Reports Server (NTRS)
Meagher, G. M.
1981-01-01
Analytical models to predict performance and operating characteristics of dual nozzle concepts were developed and improved. Aerodynamic models are available to define flow characteristics and bleed requirements for both the dual throat and dual expander concepts. Advanced analytical techniques were utilized to provide quantitative estimates of the bleed flow, boundary layer, and shock effects within dual nozzle engines. Thermal analyses were performed to define cooling requirements for baseline configurations, and special studies of unique dual nozzle cooling problems defined feasible means of achieving adequate cooling.
Philpott, H; Nandurkar, S; Royce, S G; Thien, F; Gibson, P R
2016-08-01
The use of allergy tests to guide dietary treatment for eosinophilic oesophagitis (EoE) is controversial and data are limited. Aeroallergen sensitisation patterns and food triggers have been defined in Northern Hemisphere cohorts only. This study aimed to determine if allergy tests that are routinely available can predict food triggers in adult patients with EoE, and to define the food triggers and aeroallergen sensitisation patterns in a novel Southern Hemisphere (Australian) cohort of patients. Consecutive patients with EoE who elected to undergo dietary therapy were prospectively assessed, demographic details and atopic characteristics recorded, and allergy tests, comprising skin-prick and skin-patch tests, serum allergen-specific IgE, basophil activation test and serum food-specific IgG, were performed. Patients underwent a six-food elimination diet with a structured algorithm that included endoscopic and histological examination of the oesophagus a minimum of 2 weeks after each challenge. Response was defined as <15 eosinophils per HPF. Foods defined as triggers were considered the gold standard and were compared with those identified by allergy testing. No allergy test could accurately predict actual food triggers. Concordance among skin-prick and serum allergen-specific IgE was high for aeroallergens only. Among seasonal aeroallergens, rye-grass sensitisation was predominant. Food triggers were commonly wheat, milk and egg, alone or in combination. None of the currently available allergy tests predicts food triggers for EoE. Exclusion-rechallenge methodology with oesophageal histological assessment remains the only effective investigation. The same food triggers were identified in this Southern Hemisphere cohort as previously described. © 2016 John Wiley & Sons Ltd.
Peach, Megan L; Zakharov, Alexey V; Liu, Ruifeng; Pugliese, Angelo; Tawa, Gregory; Wallqvist, Anders; Nicklaus, Marc C
2012-10-01
Metabolism has been identified as a defining factor in drug development success or failure because of its impact on many aspects of drug pharmacology, including bioavailability, half-life and toxicity. In this article, we provide an outline and descriptions of the resources for metabolism-related property predictions that are currently either freely or commercially available to the public. These resources include databases with data on, and software for prediction of, several end points: metabolite formation, sites of metabolic transformation, binding to metabolizing enzymes and metabolic stability. We attempt to place each tool in historical context and describe, wherever possible, the data it was based on. For predictions of interactions with metabolizing enzymes, we show a typical set of results for a small test set of compounds. Our aim is to give a clear overview of the areas and aspects of metabolism prediction in which the currently available resources are useful and accurate, and the areas in which they are inadequate or missing entirely.
A review of propeller noise prediction methodology: 1919-1994
NASA Technical Reports Server (NTRS)
Metzger, F. Bruce
1995-01-01
This report summarizes a review of the literature regarding propeller noise prediction methods. The review is divided into six sections: (1) early methods; (2) more recent methods based on earlier theory; (3) more recent methods based on the Acoustic Analogy; (4) more recent methods based on Computational Acoustics; (5) empirical methods; and (6) broadband methods. The report concludes that a large number of noise prediction procedures are available, varying markedly in complexity. Deficiencies in the accuracy of methods may in many cases be related not to the methods themselves, but to the accuracy and detail of the aerodynamic inputs used to calculate noise. The steps recommended in the report to provide accurate and easy-to-use prediction methods are: (1) identify reliable test data; (2) define and conduct test programs to fill gaps in the existing data base; (3) identify the most promising prediction methods; (4) evaluate promising prediction methods relative to the data base; (5) identify and correct the weaknesses in the prediction methods, including lack of user friendliness, and include features now available only in research codes; (6) confirm the accuracy of improved prediction methods against the data base; and (7) make the methods widely available and provide training in their use.
Questioning the Faith - Models and Prediction in Stream Restoration (Invited)
NASA Astrophysics Data System (ADS)
Wilcock, P.
2013-12-01
River management and restoration demand prediction at and beyond our present ability. Management questions, framed appropriately, can motivate fundamental advances in science, although the connection between research and application is not always easy, useful, or robust. Why is that? This presentation considers the connection between models and management, a connection that requires critical and creative thought on both sides. Essential challenges for managers include clearly defining project objectives and accommodating uncertainty in any model prediction. Essential challenges for the research community include matching the appropriate model to project duration, space, funding, information, and social constraints and clearly presenting answers that are actually useful to managers. Better models do not lead to better management decisions or better designs if the predictions are not relevant to and accepted by managers. In fact, any prediction may be irrelevant if the need for prediction is not recognized. The predictive target must be developed in an active dialog between managers and modelers. This relationship, like any other, can take time to develop. For example, large segments of stream restoration practice have remained resistant to models and prediction because the foundational tenet - that channels built to a certain template will be able to transport the supplied sediment with the available flow - has no essential physical connection between cause and effect. Stream restoration practice can be steered in a predictive direction in which project objectives are defined as predictable attributes and testable hypotheses. If stream restoration design is defined in terms of the desired performance of the channel (static or dynamic, sediment surplus or deficit), then channel properties that provide these attributes can be predicted and a basis exists for testing approximations, models, and predictions.
Staley, Dennis; Kean, Jason W.; Cannon, Susan H.; Schmidt, Kevin M.; Laber, Jayme L.
2012-01-01
Rainfall intensity–duration (ID) thresholds are commonly used to predict the temporal occurrence of debris flows and shallow landslides. Typically, thresholds are subjectively defined as the upper limit of peak rainstorm intensities that do not produce debris flows and landslides, or as the lower limit of peak rainstorm intensities that initiate debris flows and landslides. In addition, peak rainstorm intensities are often used to define thresholds, as data regarding the precise timing of debris flows and associated rainfall intensities are usually not available, and rainfall characteristics are often estimated from distant gauging locations. Here, we attempt to improve the performance of existing threshold-based predictions of post-fire debris-flow occurrence by utilizing data on the precise timing of debris flows relative to rainfall intensity, and develop an objective method to define the threshold intensities. We objectively defined the thresholds by maximizing the number of correct predictions of debris flow occurrence while minimizing the rate of both Type I (false positive) and Type II (false negative) errors. We identified that (1) there were statistically significant differences between peak storm and triggering intensities, (2) the objectively defined threshold model presents a better balance between predictive success, false alarms and failed alarms than previous subjectively defined thresholds, (3) thresholds based on measurements of rainfall intensity over shorter duration (≤60 min) are better predictors of post-fire debris-flow initiation than longer duration thresholds, and (4) the objectively defined thresholds were exceeded prior to the recorded time of debris flow at frequencies similar to or better than subjective thresholds. Our findings highlight the need to better constrain the timing and processes of initiation of landslides and debris flows for future threshold studies. In addition, the methods used to define rainfall thresholds in this study represent a computationally simple means of deriving critical values for other studies of nonlinear phenomena characterized by thresholds.
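To make the objective threshold definition concrete, the sketch below scans candidate intensity thresholds and selects the one that best balances correct predictions against Type I and Type II errors; scoring the balance with the true skill statistic (sensitivity + specificity − 1) is one common formalization, assumed here, and the rainfall data are synthetic placeholders rather than the study's records.

```python
import numpy as np

def objective_threshold(intensity, triggered):
    """Pick the rainfall-intensity threshold that maximizes correct
    predictions while balancing Type I (false positive) and Type II
    (false negative) errors, scored by the true skill statistic."""
    best_t, best_score = None, -np.inf
    for t in np.unique(intensity):
        alarm = intensity >= t                 # predict debris flow at/above t
        tp = np.sum(alarm & triggered)
        fp = np.sum(alarm & ~triggered)        # Type I errors
        fn = np.sum(~alarm & triggered)        # Type II errors
        tn = np.sum(~alarm & ~triggered)
        score = tp / (tp + fn) + tn / (tn + fp) - 1.0
        if score > best_score:
            best_t, best_score = t, score
    return best_t, best_score

# Synthetic 60-min peak intensities (mm/h) and debris-flow occurrence flags
rng = np.random.default_rng(0)
intensity = rng.gamma(2.0, 10.0, size=200)
triggered = intensity + rng.normal(0.0, 8.0, size=200) > 30.0
print(objective_threshold(intensity, triggered))
```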
Review of Factors, Methods, and Outcome Definition in Designing Opioid Abuse Predictive Models.
Alzeer, Abdullah H; Jones, Josette; Bair, Matthew J
2018-05-01
Several opioid risk assessment tools are available to prescribers to evaluate opioid analgesic abuse among chronic patients. The objectives of this study are to 1) identify variables available in the literature to predict opioid abuse; 2) explore and compare the methods (population, database, and analysis) used to develop statistical models that predict opioid abuse; and 3) understand how outcomes were defined in each statistical model predicting opioid abuse. The OVID database was searched for this study. The search was limited to articles written in English and published from January 1990 to April 2016, and generated 1,409 articles. Only seven studies, describing nine models, met our inclusion/exclusion criteria; from these nine models we identified 75 distinct variables. Three studies used administrative claims data, and four studies used electronic health record data. The majority, four of seven articles (six of nine models), depended primarily on the presence or absence of opioid abuse or dependence (ICD-9 diagnosis code) to define opioid abuse; two articles used a predefined list of opioid-related aberrant behaviors. We identified variables used to predict opioid abuse from electronic health records and administrative data. Medication variables are the recurrent variables in the articles reviewed (33 variables), and age and gender are the most consistent demographic variables in predicting opioid abuse. Overall, there is similarity in the sampling methods and inclusion/exclusion criteria (age, number of prescriptions, follow-up period, and data analysis methods). Future research utilizing unstructured data may increase the accuracy of opioid abuse models.
GeneSilico protein structure prediction meta-server.
Kurowski, Michal A; Bujnicki, Janusz M
2003-07-01
Rigorous assessments of protein structure prediction have demonstrated that fold recognition methods can identify remote similarities between proteins when standard sequence search methods fail. It has been shown that the accuracy of predictions is improved when refined multiple sequence alignments are used instead of single sequences and if different methods are combined to generate a consensus model. There are several meta-servers available that integrate protein structure predictions performed by various methods, but they do not allow for submission of user-defined multiple sequence alignments and they seldom offer confidentiality of the results. We developed a novel WWW gateway for protein structure prediction, which combines the useful features of other meta-servers available, but with much greater flexibility of the input. The user may submit an amino acid sequence or a multiple sequence alignment to a set of methods for primary, secondary and tertiary structure prediction. Fold-recognition results (target-template alignments) are converted into full-atom 3D models and the quality of these models is uniformly assessed. A consensus between different FR methods is also inferred. The results are conveniently presented on-line on a single web page over a secure, password-protected connection. The GeneSilico protein structure prediction meta-server is freely available for academic users at http://genesilico.pl/meta.
NASA Astrophysics Data System (ADS)
Farmann, Alexander; Sauer, Dirk Uwe
2016-10-01
This study provides an overview of available techniques for on-board State-of-Available-Power (SoAP) prediction of lithium-ion batteries (LIBs) in electric vehicles. Different approaches dealing with the on-board estimation of battery State-of-Charge (SoC) or State-of-Health (SoH) have been discussed extensively in various studies in the past; however, the topic of SoAP prediction has not yet been explored comprehensively. The prediction of the maximum power that can be applied to the battery by discharging or charging it during acceleration, regenerative braking and gradient climbing is definitely one of the most challenging tasks of battery management systems. In large lithium-ion battery packs, many factors, such as temperature distribution and cell-to-cell deviations in actual battery impedance or capacity in either the initial or the aged state, require efficient and reliable methods for battery state estimation. The available battery power is limited by the safe operating area (SOA), where the SOA is defined by battery temperature, current, voltage and SoC. Accurate SoAP prediction allows the energy management system to regulate the power flow of the vehicle more precisely, optimize battery performance and improve battery lifetime accordingly. To this end, scientific and technical literature sources are studied and available approaches are reviewed.
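As a minimal illustration of how an SoAP limit follows from the SOA (a sketch under assumed parameters, not a method from the review), the snippet below computes the peak discharge power of a simple Thevenin cell model, i.e. an open-circuit voltage behind an internal resistance, subject to cut-off voltage, maximum current and minimum SoC limits.

```python
def available_discharge_power(v_oc, r_i, v_min, i_max, soc, soc_min=0.05):
    """Peak discharge power (W) within the safe operating area for a
    Thevenin cell model: terminal voltage v = v_oc - i * r_i must stay
    above v_min, current below i_max, and SoC above soc_min."""
    if soc <= soc_min:
        return 0.0                             # no usable charge left
    i_v = (v_oc - v_min) / r_i                 # current that hits the voltage limit
    i = min(i_v, i_max)                        # respect the current limit as well
    return (v_oc - i * r_i) * i                # power at the binding limit

# Illustrative cell: 3.6 V open-circuit, 2 mOhm, 2.8 V cut-off, 300 A limit
print(available_discharge_power(3.6, 0.002, 2.8, 300.0, soc=0.5))  # -> 900.0 W
```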
Defining a predictive model of developmental toxicity from in vitro and high-throughput screening (HTS) assays can be limited by the availability of developmental defects data. ToxRefDB (www.epa.gov/ncct/todrefdb) was built from animal studies on data-rich environmental chemicals...
Brooker, Simon; Beasley, Michael; Ndinaromtan, Montanan; Madjiouroum, Ester Mobele; Baboguel, Marie; Djenguinabe, Elie; Hay, Simon I.; Bundy, Don A. P.
2002-01-01
OBJECTIVE: To design and implement a rapid and valid epidemiological assessment of helminths among schoolchildren in Chad using ecological zones defined by remote sensing satellite sensor data and to investigate the environmental limits of helminth distribution. METHODS: Remote sensing proxy environmental data were used to define seven ecological zones in Chad. These were combined with population data in a geographical information system (GIS) in order to define a sampling protocol. On this basis, 20 schools were surveyed. Multilevel analysis, by means of generalized estimating equations to account for clustering at the school level, was used to investigate the relationship between infection patterns and key environmental variables. FINDINGS: In a sample of 1023 schoolchildren, 22.5% were infected with Schistosoma haematobium and 32.7% with hookworm. None were infected with Ascaris lumbricoides or Trichuris trichiura. The prevalence of S. haematobium and hookworm showed marked geographical heterogeneity and the observed patterns showed a close association with the defined ecological zones and significant relationships with environmental variables. These results contribute towards defining the thermal limits of geohelminth species. Predictions of infection prevalence were made for each school surveyed with the aid of models previously developed for Cameroon. These models correctly predicted that A. lumbricoides and T. trichiura would not occur in Chad but the predictions for S. haematobium were less reliable at the school level. CONCLUSION: GIS and remote sensing can play an important part in the rapid planning of helminth control programmes where little information on disease burden is available. Remote sensing prediction models can indicate patterns of geohelminth infection but can only identify potential areas of high risk for S. haematobium. PMID:12471398
Demonstrating the improvement of predictive maturity of a computational model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hemez, Francois M; Unal, Cetin; Atamturktur, Huriye S
2010-01-01
We demonstrate an improvement of predictive capability brought to a non-linear material model using a combination of test data, sensitivity analysis, uncertainty quantification, and calibration. A model that captures increasingly complicated phenomena, such as plasticity, temperature and strain rate effects, is analyzed. Predictive maturity is defined, here, as the accuracy of the model to predict multiple Hopkinson bar experiments. A statistical discrepancy quantifies the systematic disagreement (bias) between measurements and predictions. Our hypothesis is that improving the predictive capability of a model should translate into better agreement between measurements and predictions. This agreement, in turn, should lead to a smaller discrepancy. We have recently proposed to use discrepancy and coverage, that is, the extent to which the physical experiments used for calibration populate the regime of applicability of the model, as the basis to define a Predictive Maturity Index (PMI). It was shown that predictive maturity could be improved when additional physical tests are made available to increase coverage of the regime of applicability. This contribution illustrates how the PMI changes as 'better' physics are implemented in the model. The application is the non-linear Preston-Tonks-Wallace (PTW) strength model applied to Beryllium metal. We demonstrate that our framework tracks the evolution of maturity of the PTW model. Robustness of the PMI with respect to the selection of coefficients needed in its definition is also studied.
Lazzari, Barbara; Caprera, Andrea; Cestaro, Alessandro; Merelli, Ivan; Del Corvo, Marcello; Fontana, Paolo; Milanesi, Luciano; Velasco, Riccardo; Stella, Alessandra
2009-06-29
Two complete genome sequences are available for Vitis vinifera Pinot noir. Based on the sequence and gene predictions produced by the IASMA, we performed an in silico detection of putative microRNA genes and of their targets, and collected the most reliable microRNA predictions in a web database. The application is available at http://www.itb.cnr.it/ptp/grapemirna/. The program FindMiRNA was used to detect putative microRNA genes in the grape genome. A very high number of predictions was retrieved, calling for validation. Nine parameters were calculated and, based on the grape microRNA dataset available at miRBase, thresholds were defined and applied to FindMiRNA predictions having targets in gene exons. In the resulting subset, predictions were ranked according to precursor positions and sequence similarity, and to target identity. To further validate FindMiRNA predictions, comparisons to the Arabidopsis genome, to the grape Genoscope genome, and to the grape EST collection were performed. Results were stored in a MySQL database and a web interface was prepared to query the database and retrieve predictions of interest. The GrapeMiRNA database encompasses 5,778 microRNA predictions spanning the whole grape genome. Predictions are integrated with information that can be of use in selection procedures. Tools added in the web interface also allow inspection of predictions according to gene ontology classes and metabolic pathways of targets. The GrapeMiRNA database can be of help in selecting candidate microRNA genes to be validated.
On the New Concept of the Available Water Climatology and Its Application
NASA Astrophysics Data System (ADS)
Byun, H. R.; Kim, D. W.; Choi, K. S.; Deo, R. C.; Lee, S. M.; Park, C. K.; Kwon, S. H.; Kim, G. B.; Kwon, H. N.
2014-12-01
We propose a new concept of climatology called the Available Water Climate (AWC). Available water is the water that remains usable at every moment, calculated regardless of time interval or precipitation amount. With this concept, the Available Water Resources Index (AWRI) has been quantified following the earlier work of Byun and Lee (2002). The applicability of the AWRI to the assessment and prediction of water-related disasters, as well as to academic research, has been tested, with the following merits. First, the threshold value of AWRI for the occurrence of water-related disasters such as flood, drought, inundation and landslide became clear for each region, so their assessment and prediction became much more precise than before: the more extreme the AWRI value, the more severe the related disaster. For example, all disasters caused by heavy rain, even a small inundation, became predictable at the time a heavy-rain warning is issued, with the help of the long-term remained water index (LWI). As another example, drought intensity and its start and end dates are defined more reasonably and precisely than by any other drought index, with the help of the Effective Drought Index (EDI) using a sliding time scale. Second, the spatiotemporal distribution of the water environment was quantified clearly and objectively using the AWRI, and the new concepts of the Water Abundant Season (WAS) and the Little Water Season (LIWAS), together with their start and end dates and their strength, were defined, which is beneficial for agriculture, forestry and all other water management. The differences in water environment among regions were also clearly quantified, making improvement of climate classifications such as Köppen's possible. Third, further merits are expected to emerge in subsequent work.
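A minimal sketch of the effective-precipitation idea behind indices like the AWRI: each day's available water is a weighted sum of past daily precipitation, with rain from j days ago down-weighted by a Byun-type harmonic tail factor. The 365-day window, the exact weighting and the synthetic rainfall are illustrative assumptions, not the operational definition.

```python
import numpy as np

def awri(precip, window=365):
    """Available-water index sketch: for each day, sum past precipitation
    with harmonic-tail weights (today's rain gets 1 + 1/2 + ... + 1/N;
    rain from j days ago gets 1/(j+1) + ... + 1/N)."""
    p = np.asarray(precip, dtype=float)
    # w[0] weights today's rain, w[j] weights rain from j days ago
    w = np.cumsum(1.0 / np.arange(1, window + 1)[::-1])[::-1]
    out = np.full(p.shape, np.nan)
    for i in range(window - 1, len(p)):
        recent_first = p[i - window + 1:i + 1][::-1]   # today back to day N-1
        out[i] = np.dot(recent_first, w)
    return out

rain = np.random.default_rng(1).gamma(0.4, 10.0, size=500)  # synthetic daily mm
print(awri(rain)[-1])
```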
Analysis and prediction of leucine-rich nuclear export signals.
la Cour, Tanja; Kiemer, Lars; Mølgaard, Anne; Gupta, Ramneek; Skriver, Karen; Brunak, Søren
2004-06-01
We present a thorough analysis of nuclear export signals and a prediction server, which we have made publicly available. The machine learning prediction method is a significant improvement over the generally used consensus patterns. Nuclear export signals (NESs) are extremely important regulators of the subcellular location of proteins. This regulation has an impact on transcription and other nuclear processes, which are fundamental to the viability of the cell. NESs are studied in relation to cancer, the cell cycle, cell differentiation and other important aspects of molecular biology. Our conclusion from this analysis is that the most important properties of NESs are accessibility and flexibility, allowing relevant proteins to interact with the signal. Furthermore, we show that the known hydrophobic residues are not the only ones important in defining a nuclear export signal. We employ both neural networks and hidden Markov models in the prediction algorithm and verify the method on the most recently discovered NESs. The NES predictor (NetNES) is made available for general use at http://www.cbs.dtu.dk/.
NASA Technical Reports Server (NTRS)
MCKissick, Burnell T. (Technical Monitor); Plassman, Gerald E.; Mall, Gerald H.; Quagliano, John R.
2005-01-01
Linear multivariable regression models for predicting day and night Eddy Dissipation Rate (EDR) from available meteorological data sources are defined and validated. Model definition is based on a combination of 1997-2000 Dallas/Fort Worth (DFW) data sources, EDR from Aircraft Vortex Spacing System (AVOSS) deployment data, and regression variables primarily from corresponding Automated Surface Observation System (ASOS) data. Model validation is accomplished through EDR predictions on a similar combination of 1994-1995 Memphis (MEM) AVOSS and ASOS data. Model forms include an intercept plus a single term of fixed optimal power for each of these regression variables: 30-minute forward-averaged mean and variance of near-surface wind speed and temperature, variance of wind direction, and a discrete cloud cover metric. Distinct day and night models, regressing on EDR and the natural log of EDR respectively, yield best performance and avoid model discontinuity over day/night data boundaries.
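The stated model form, an intercept plus one fixed-power term per predictor with the night model regressing on ln(EDR), can be sketched as an ordinary least-squares fit after a power transform. The powers, variable names and data below are placeholders, not the values fitted in the study.

```python
import numpy as np

# Hypothetical fixed powers for three of the regression variables
POWERS = {"wind_mean": 1.5, "wind_var": 1.0, "temp_var": 0.5}

def fit_edr_model(X, y, powers=POWERS, night=False):
    """Fit EDR = b0 + sum_j b_j * x_j**p_j by least squares;
    the night model regresses on the natural log of EDR instead."""
    cols = [np.power(X[name], p) for name, p in powers.items()]
    A = np.column_stack([np.ones(len(y))] + cols)   # intercept + power terms
    target = np.log(y) if night else y
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return coef

rng = np.random.default_rng(2)
X = {name: rng.uniform(0.1, 5.0, 100) for name in POWERS}
y = 0.01 + 0.002 * X["wind_mean"] ** 1.5 + rng.normal(0, 1e-3, 100).clip(-0.009)
print(fit_edr_model(X, y))   # day model: intercept plus one coefficient per term
```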
Universal fingerprinting chip server.
Casique-Almazán, Janet; Larios-Serrato, Violeta; Olguín-Ruíz, Gabriela Edith; Sánchez-Vallejo, Carlos Javier; Maldonado-Rodríguez, Rogelio; Méndez-Tenorio, Alfonso
2012-01-01
The Virtual Hybridization approach predicts the most probable hybridization sites across a target nucleic acid of known sequence, including both perfect and mismatched pairings. Potential hybridization sites, having a user-defined minimum number of bases paired with the oligonucleotide probe, are first identified. Free energy values are then evaluated for each potential hybridization site, and if a site's calculated free energy is equal to or more negative than a user-defined cut-off value, it is considered a site with a high probability of hybridization. The Universal Fingerprinting Chip Applications Server contains the software for visualizing predicted hybridization patterns, which yields a simulated hybridization fingerprint that can be compared with experimentally derived fingerprints or with a virtual fingerprint arising from a different sample. The database is available for free at http://bioinformatica.homelinux.org/UFCVH/
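A toy sketch of the screen described above (not the server's code): slide the probe along the target, keep windows with at least the user-defined minimum of complementary base pairs, then keep those whose estimated free energy is at least as negative as the cut-off. The per-pair energies are invented placeholders, the real tool uses proper hybridization thermodynamics, and strand-orientation details are ignored here.

```python
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}
PAIR_DG = {"AT": -2.0, "GC": -3.0}   # hypothetical kcal/mol per matched pair

def hybridization_sites(target, probe, min_pairs, dg_cutoff):
    """Return (position, matched_bases, free_energy) for every window whose
    match count meets min_pairs and whose energy is <= dg_cutoff."""
    sites = []
    for i in range(len(target) - len(probe) + 1):
        window = target[i:i + len(probe)]
        matched = [b for b, t in zip(probe, window) if COMPLEMENT[b] == t]
        if len(matched) < min_pairs:
            continue                              # too few paired bases
        dg = sum(PAIR_DG["GC" if b in "GC" else "AT"] for b in matched)
        if dg <= dg_cutoff:                       # more negative = more stable
            sites.append((i, len(matched), dg))
    return sites

print(hybridization_sites("ATGCGTACGTTAGC", "TACGCAT", min_pairs=5, dg_cutoff=-10.0))
```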
Smit, Mathijs G D; Jak, Robbert G; Rye, Henrik; Frost, Tone Karin; Singsaas, Ivar; Karman, Chris C
2008-04-01
In order to improve the ecological status of aquatic systems, both toxic (e.g., chemical) and nontoxic stressors (e.g., suspended particles) should be evaluated. This paper describes an approach to environmental risk assessment of drilling discharges to the sea. These discharges might lead to concentrations of toxic compounds and suspended clay particles in the water compartment and concentrations of toxic compounds, burial of biota, change in sediment structure, and oxygen depletion in marine sediments. The main challenges were to apply existing protocols for environmental risk assessment to nontoxic stressors and to combine risks arising from exposure to these stressors with risk from chemical exposure. The defined approach is based on species sensitivity distributions (SSDs). In addition, precautionary principles from the EU-Technical Guidance Document were incorporated to assure that the method is acceptable in a regulatory context. For all stressors a protocol was defined to construct an SSD for no observed effect concentrations (or levels; NOEC(L)-SSD) to allow for the calculation of the potentially affected fraction of species from predicted exposures. Depending on the availability of data, a NOEC-SSD for toxicants can either be directly based on available NOECs or constructed from the predicted no effect concentration and the variation in sensitivity among species. For nontoxic stressors a NOEL-SSD can be extrapolated from an SSD based on effect or field data. Potentially affected fractions of species at predicted exposures are combined into an overall risk estimate. The developed approach facilitates environmental management of drilling discharges and can be applied to define risk-mitigating measures for both toxic and nontoxic stress.
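A sketch of the SSD mechanics under the common assumption of log-normally distributed species sensitivities (the paper's per-stressor protocols differ): the potentially affected fraction (PAF) at a predicted exposure is the SSD's cumulative fraction below that exposure, and single-stressor PAFs are combined into an overall risk assuming independent action. The means and spreads below are illustrative, not values from the study.

```python
import numpy as np
from scipy.stats import norm

def paf(exposure, mean_log10_noec, sd_log10_noec):
    """Potentially affected fraction: share of species whose NOEC lies
    below the predicted exposure, from a log-normal SSD."""
    return norm.cdf(np.log10(exposure), loc=mean_log10_noec, scale=sd_log10_noec)

pafs = {
    "toxicant":       paf(0.05, mean_log10_noec=-0.5, sd_log10_noec=0.7),
    "suspended clay": paf(30.0, mean_log10_noec=2.0, sd_log10_noec=0.5),
    "burial":         paf(1.0, mean_log10_noec=0.5, sd_log10_noec=0.4),
}
overall = 1.0 - np.prod([1.0 - p for p in pafs.values()])  # independent action
print(pafs, f"overall risk: {overall:.3f}")
```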
Beating the news using social media: the case study of American Idol
NASA Astrophysics Data System (ADS)
Ciulla, Fabio; Mocanu, Delia; Baronchelli, Andrea; Goncalves, Bruno; Perra, Nicola; Vespignani, Alessandro
2013-03-01
We present a contribution to the debate on the predictability of social events using big data analytics. We focus on the elimination of contestants in the American Idol TV shows as an example of a well defined electoral phenomenon to assess the predictive power of Twitter signals. We provide evidence that Twitter activity during the time span defined by the TV show airing and the voting period following it allows the anticipation of the voting outcome. Twitter data have been analyzed to attempt the winner prediction ahead of the airing of the official result. We also show that the fraction of Tweets that contain geolocation information allows us to map the fanbase of each contestant, both within the US and abroad, showing that strong regional polarizations occur. The geolocated data are crucial for the correct prediction of the final outcome of the show, pointing out the importance of considering information beyond the aggregated Twitter signal. Although American Idol voting is just a minimal and simplified version of complex societal phenomena, this work shows that the volume of information available in online systems permits the real-time gathering of quantitative indicators that may be able to anticipate the future unfolding of opinion formation events.
[ProteoCat: a tool for planning of proteomic experiments].
Skvortsov, V S; Alekseychuk, N N; Khudyakov, D V; Mikurova, A V; Rybina, A V; Novikova, S E; Tikhonova, O V
2015-01-01
ProteoCat is a computer program designed to help researchers in the planning of large-scale proteomic experiments. The central part of this program is the hydrolysis simulation subprogram, which supports four proteases (trypsin, Lys-C, and endoproteinases AspN and GluC). For the peptides obtained after virtual hydrolysis, or loaded from a data file, a number of properties important in mass-spectrometric experiments can be calculated or predicted. The data can be analyzed or filtered to reduce the set of peptides. The program uses new and improved modifications of our methods developed to predict pI and the probability of peptide detection; pI can also be predicted using a number of popular pKa scales proposed by other investigators. The algorithm for predicting peptide retention time is similar to that used in the program SSRCalc. ProteoCat can estimate the coverage of amino acid sequences of proteins under defined limitations on peptide detection, as well as the possibility of assembly of peptide fragments with user-defined sizes of "sticky" ends. The program has a graphical user interface, is written in JAVA, and is available at http://www.ibmc.msk.ru/LPCIT/ProteoCat.
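A minimal sketch of the central hydrolysis-simulation step for trypsin (an illustration of the classical cleavage rule, not ProteoCat's actual code): cleave after K or R unless the next residue is P, optionally merging adjacent fragments to model missed cleavages.

```python
import re

def tryptic_peptides(sequence, missed_cleavages=0):
    """Virtual trypsin digest: split after K/R not followed by P, then
    join up to `missed_cleavages` adjacent fragments per peptide."""
    fragments = re.split(r"(?<=[KR])(?!P)", sequence)
    peptides = set()
    for i in range(len(fragments)):
        for j in range(i, min(i + missed_cleavages + 1, len(fragments))):
            peptides.add("".join(fragments[i:j + 1]))
    return sorted(peptides)

# Hypothetical sequence; with one missed cleavage allowed
print(tryptic_peptides("MKWVTFISLLFLFSSAYSRGVFRRDAHK", missed_cleavages=1))
```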
Distinctive Behaviors of Druggable Proteins in Cellular Networks
Workman, Paul; Al-Lazikani, Bissan
2015-01-01
The interaction environment of a protein in a cellular network is important in defining the role that the protein plays in the system as a whole, and thus its potential suitability as a drug target. Despite the importance of the network environment, it is neglected during target selection for drug discovery. Here, we present the first systematic, comprehensive computational analysis of topological, community and graphical network parameters of the human interactome and identify discriminatory network patterns that strongly distinguish drug targets from the interactome as a whole. Importantly, we identify striking differences in the network behavior of targets of cancer drugs versus targets from other therapeutic areas and explore how they may relate to successful drug combinations to overcome acquired resistance to cancer drugs. We develop, computationally validate and provide the first public domain predictive algorithm for identifying druggable neighborhoods based on network parameters. We also make available full predictions for 13,345 proteins to aid target selection for drug discovery. All target predictions are available through canSAR.icr.ac.uk. Underlying data and tools are available at https://cansar.icr.ac.uk/cansar/publications/druggable_network_neighbourhoods/. PMID:26699810
Dennerline, D.E.; Van Den Avyle, M.J.
2000-01-01
Striped bass Morone saxatilis and hybrid bass M. saxatilis x M. chrysops have been stocked to establish fisheries in many US reservoirs, but success has been limited by a poor understanding of relations between prey biomass and predator growth and survival. To define sizes of prey that are morphologically available, we developed predictive relationships between predator length, mouth dimensions, and expected maximum prey size; predictions were then validated using published data on sizes of clupeid prey (Dorosoma spp.) in five US reservoirs. Further, we compared the biomass of prey considered available to predators using two forms of a length-based consumption model - a previously published AP/P ratio and a revised model based on our results. Predictions of maximum prey size using predator gape width (GW) were consistent with observed prey sizes in US reservoirs. Length of consumed Dorosoma was significantly, but weakly, correlated with predator length in four of the five reservoirs (r2 = 0.006-0.336, P 150 mm TL) were abundant. (C) 2000 Elsevier Science B.V.
a Gaussian Process Based Multi-Person Interaction Model
NASA Astrophysics Data System (ADS)
Klinger, T.; Rottensteiner, F.; Heipke, C.
2016-06-01
Online multi-person tracking in image sequences is commonly guided by recursive filters, whose predictive models define the expected positions of future states. When a predictive model deviates too much from the true motion of a pedestrian, which is often the case in crowded scenes due to unpredicted accelerations, the data association is prone to fail. In this paper we propose a novel predictive model on the basis of Gaussian Process Regression. The model takes into account the motion of every tracked pedestrian in the scene and the prediction is executed with respect to the velocities of all interrelated persons. As shown by the experiments, the model is capable of yielding more plausible predictions even in the presence of mutual occlusions or missing measurements. The approach is evaluated on a publicly available benchmark and outperforms other state-of-the-art trackers.
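As a toy illustration of a Gaussian Process predictive model for tracking (omitting the paper's coupling to the velocities of interacting pedestrians), the sketch below regresses one pedestrian's observed velocity on time with GP regression and predicts the next step with an uncertainty estimate; all data are synthetic.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

t = np.arange(10, dtype=float).reshape(-1, 1)        # past frames
vx = 1.2 + 0.1 * np.sin(0.5 * t.ravel())             # synthetic x-velocity (m/frame)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=3.0) + WhiteKernel(1e-4))
gp.fit(t, vx)

v_next, v_std = gp.predict([[10.0]], return_std=True)  # next-frame velocity
x_last, dt = 12.0, 1.0                                  # hypothetical last position
x_pred = x_last + v_next[0] * dt                        # expected next position
print(f"v = {v_next[0]:.3f} +/- {v_std[0]:.3f}, predicted x = {x_pred:.2f}")
```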
Towards cleaner combustion engines through groundbreaking detailed chemical kinetic models
Battin-Leclerc, Frédérique; Blurock, Edward; Bounaceur, Roda; Fournet, René; Glaude, Pierre-Alexandre; Herbinet, Olivier; Sirjean, Baptiste; Warth, V.
2013-01-01
In the context of limiting the environmental impact of transportation, this paper reviews new directions being followed in the development of more predictive and more accurate detailed chemical kinetic models for the combustion of fuels. In the first part, the performance of current models, especially in terms of the prediction of pollutant formation, is evaluated. In the following parts, recent methods and ways to improve these models are described. Emphasis is given to the development of detailed models based on elementary reactions, to the production of the related thermochemical and kinetic parameters, and to the experimental techniques available to produce the data necessary to evaluate model predictions under well defined conditions. PMID:21597604
High capacity demonstration of honeycomb panel heat pipes
NASA Technical Reports Server (NTRS)
Tanzer, H. J.
1989-01-01
The feasibility of enhancing the performance of the sandwich panel heat pipe was investigated for moderate-temperature-range heat rejection radiators on future high-power spacecraft. The hardware development program consisted of performance prediction modeling, fabrication, ground test, and data correlation. Using available sandwich panel materials, a series of subscale test panels were augmented with high-capacity sideflow and temperature-control variable conductance features, and test evaluated for correlation with performance prediction codes. Using the correlated prediction model, a 50-kW full-size radiator was defined using methanol working fluid and closely spaced sideflows. A new concept called the hybrid radiator individually optimizes heat pipe components. A 2.44-m long hybrid test vehicle demonstrated proof-of-principle performance.
Assessment of analytical techniques for predicting solid propellant exhaust plumes
NASA Technical Reports Server (NTRS)
Tevepaugh, J. A.; Smith, S. D.; Penny, M. M.
1977-01-01
The calculation of solid propellant exhaust plume flow fields is addressed. Two major areas covered are: (1) the applicability of empirical data currently available to define particle drag coefficients, heat transfer coefficients, mean particle size and particle size distributions, and (2) thermochemical modeling of the gaseous phase of the flow field. Comparisons of experimentally measured and analytically predicted data are made. The experimental data were obtained for subscale solid propellant motors with aluminum loadings of 2, 10 and 15%. Analytical predictions were made using a fully coupled two-phase numerical solution. Data comparisons will be presented for radial distributions at plume axial stations of 5, 12, 16 and 20 diameters.
NASA Technical Reports Server (NTRS)
Wang, John T.; Pineda, Evan J.; Ranatunga, Vipul; Smeltzer, Stanley S.
2015-01-01
A simple continuum damage mechanics (CDM) based 3D progressive damage analysis (PDA) tool for laminated composites was developed and implemented as a user-defined material subroutine to link with a commercially available explicit finite element code. This PDA tool uses linear lamina properties from standard tests, predicts damage initiation with the easy-to-implement Hashin-Rotem failure criteria, and, in the damage evolution phase, evaluates the degradation of material properties based on the crack band theory and traction-separation cohesive laws. It follows Matzenmiller et al.'s formulation to incorporate the degrading material properties into the damaged stiffness matrix. Since nonlinear shear and matrix stress-strain relations are not implemented, correction factors are used to slow the reduction of the damaged shear stiffness terms, reflecting the effect of these nonlinearities on the laminate strength predictions. This CDM based PDA tool is implemented as a user-defined material (VUMAT) to link with the Abaqus/Explicit code. Strength predictions obtained using this VUMAT are correlated with test data for a set of notched specimens under tension and compression loads.
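For reference, a plane-stress sketch of Hashin-Rotem damage-initiation indices of the kind such a tool evaluates (an index reaching 1 flags onset); this illustrates the criterion only, not the VUMAT code, and the ply strengths in the example are invented.

```python
def hashin_rotem_indices(s11, s22, t12, XT, XC, YT, YC, S):
    """Plane-stress Hashin-Rotem initiation indices: the fiber mode is
    driven by the longitudinal stress alone, the matrix mode by combined
    transverse and in-plane shear stress; onset when an index >= 1."""
    fiber = (s11 / XT) ** 2 if s11 >= 0.0 else (s11 / XC) ** 2
    Y = YT if s22 >= 0.0 else YC            # tension vs. compression strength
    matrix = (s22 / Y) ** 2 + (t12 / S) ** 2
    return {"fiber": fiber, "matrix": matrix}

# Invented carbon/epoxy ply strengths (MPa) under a combined stress state
print(hashin_rotem_indices(s11=1200.0, s22=30.0, t12=40.0,
                           XT=2000.0, XC=1200.0, YT=50.0, YC=200.0, S=70.0))
```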
Predicting redox conditions in groundwater at a regional scale
Tesoriero, Anthony J.; Terziotti, Silvia; Abrams, Daniel B.
2015-01-01
Defining the oxic-suboxic interface is often critical for determining pathways for nitrate transport in groundwater and to streams at the local scale. Defining this interface on a regional scale is complicated by the spatial variability of reaction rates. The probability of oxic groundwater in the Chesapeake Bay watershed was predicted by relating dissolved O2 concentrations in groundwater samples to indicators of residence time and/or electron donor availability using logistic regression. Variables that describe surficial geology, position in the flow system, and soil drainage were important predictors of oxic water. The probability of encountering oxic groundwater at a 30 m depth and the depth to the bottom of the oxic layer were predicted for the Chesapeake Bay watershed. The influence of depth to the bottom of the oxic layer on stream nitrate concentrations and time lags (i.e., time period between land application of nitrogen and its effect on streams) are illustrated using model simulations for hypothetical basins. Regional maps of the probability of oxic groundwater should prove useful as indicators of groundwater susceptibility and stream susceptibility to contaminant sources derived from groundwater.
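A sketch of the regional logistic-regression step, with invented predictor names standing in for the surficial-geology, flow-position and soil-drainage variables and a synthetic training set: the fitted model returns the probability that groundwater at a site is oxic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 500
depth = rng.uniform(0.0, 60.0, n)          # depth below water table (m)
drained = rng.uniform(0.0, 1.0, n)         # fraction well-drained soils
coarse = rng.integers(0, 2, n)             # coarse surficial geology (0/1)

# Synthetic truth: deeper = less oxic; well-drained, coarse settings = more oxic
logit = -0.08 * depth + 2.0 * drained + 1.5 * coarse + rng.normal(0.0, 1.0, n)
oxic = (logit > -1.0).astype(int)          # dissolved O2 above a cut-off

X = np.column_stack([depth, drained, coarse])
model = LogisticRegression().fit(X, oxic)
print("P(oxic at 30 m):", model.predict_proba([[30.0, 0.7, 1]])[0, 1].round(2))
```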
Ezendam, Janine; Braakhuis, Hedwig M; Vandebriel, Rob J
2016-12-01
The hazard assessment of skin sensitizers relies mainly on animal testing, but much progress is made in the development, validation and regulatory acceptance and implementation of non-animal predictive approaches. In this review, we provide an update on the available computational tools and animal-free test methods for the prediction of skin sensitization hazard. These individual test methods address mostly one mechanistic step of the process of skin sensitization induction. The adverse outcome pathway (AOP) for skin sensitization describes the key events (KEs) that lead to skin sensitization. In our review, we have clustered the available test methods according to the KE they inform: the molecular initiating event (MIE/KE1)-protein binding, KE2-keratinocyte activation, KE3-dendritic cell activation and KE4-T cell activation and proliferation. In recent years, most progress has been made in the development and validation of in vitro assays that address KE2 and KE3. No standardized in vitro assays for T cell activation are available; thus, KE4 cannot be measured in vitro. Three non-animal test methods, addressing either the MIE, KE2 or KE3, are accepted as OECD test guidelines, and this has accelerated the development of integrated or defined approaches for testing and assessment (e.g. testing strategies). The majority of these approaches are mechanism-based, since they combine results from multiple test methods and/or computational tools that address different KEs of the AOP to estimate skin sensitization potential and sometimes potency. Other approaches are based on statistical tools. Until now, eleven different testing strategies have been published, the majority using the same individual information sources. Our review shows that some of the defined approaches to testing and assessment are able to accurately predict skin sensitization hazard, sometimes even more accurate than the currently used animal test. A few defined approaches are developed to provide an estimate of the potency sub-category of a skin sensitizer as well, but these approaches need further independent evaluation with a new dataset of chemicals. To conclude, this update shows that the field of non-animal approaches for skin sensitization has evolved greatly in recent years and that it is possible to predict skin sensitization hazard without animal testing.
Automatic prediction of protein domains from sequence information using a hybrid learning system.
Nagarajan, Niranjan; Yona, Golan
2004-06-12
We describe a novel method for detecting the domain structure of a protein from sequence information alone. The method is based on analyzing multiple sequence alignments that are derived from a database search. Multiple measures are defined to quantify the domain information content of each position along the sequence and are combined into a single predictor using a neural network. The output is further smoothed and post-processed using a probabilistic model to predict the most likely transition positions between domains. The method was assessed using the domain definitions in SCOP and CATH for proteins of known structure and was compared with several other existing methods. Our method performs well both in terms of accuracy and sensitivity. It improves significantly over the best methods available, even some of the semi-manual ones, while being fully automatic. Our method can also be used to suggest and verify domain partitions based on structural data. A few examples of predicted domain definitions and alternative partitions, as suggested by our method, are also discussed. An online domain-prediction server is available at http://biozon.org/tools/domains/
Drifter-based estimate of the 5 year dispersal of Fukushima-derived radionuclides
NASA Astrophysics Data System (ADS)
Rypina, I. I.; Jayne, S. R.; Yoshida, S.; Macdonald, A. M.; Buesseler, K.
2014-11-01
Employing some 40 years of North Pacific drifter-track observations from the Global Drifter Program database, statistics defining the horizontal spread of radionuclides from Fukushima nuclear power plant into the Pacific Ocean are investigated over a time scale of 5 years. A novel two-iteration method is employed to make the best use of the available drifter data. Drifter-based predictions of the temporal progression of the leading edge of the radionuclide distribution are compared to observed radionuclide concentrations from research surveys occupied in 2012 and 2013. Good agreement between the drifter-based predictions and the observations is found.
Prediction and explanation in the multiverse
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garriga, J.; Vilenkin, A.
2008-02-15
Probabilities in the multiverse can be calculated by assuming that we are typical representatives in a given reference class. But is this class well defined? What should be included in the ensemble in which we are supposed to be typical? There is a widespread belief that this question is inherently vague, and that there are various possible choices for the types of reference objects which should be counted in. Here we argue that the 'ideal' reference class (for the purpose of making predictions) can be defined unambiguously in a rather precise way, as the set of all observers with identical information content. When the observers in a given class perform an experiment, the class branches into subclasses who learn different information from the outcome of that experiment. The probabilities for the different outcomes are defined as the relative numbers of observers in each subclass. For practical purposes, wider reference classes can be used, where we trace over all information which is uncorrelated to the outcome of the experiment, or whose correlation with it is beyond our current understanding. We argue that, once we have gathered all practically available evidence, the optimal strategy for making predictions is to consider ourselves typical in any reference class we belong to, unless we have evidence to the contrary. In the latter case, the class must be correspondingly narrowed.
High accuracy operon prediction method based on STRING database scores.
Taboada, Blanca; Verde, Cristina; Merino, Enrique
2010-07-01
We present a simple and highly accurate computational method for operon prediction, based on intergenic distances and functional relationships between the protein products of contiguous genes, as defined by the STRING database (Jensen,L.J., Kuhn,M., Stark,M., Chaffron,S., Creevey,C., Muller,J., Doerks,T., Julien,P., Roth,A., Simonovic,M. et al. (2009) STRING 8-a global view on proteins and their functional interactions in 630 organisms. Nucleic Acids Res., 37, D412-D416). These two parameters were used to train a neural network on a subset of experimentally characterized Escherichia coli and Bacillus subtilis operons. Our predictive model was successfully tested on the set of experimentally defined operons in E. coli and B. subtilis, with accuracies of 94.6 and 93.3%, respectively. As far as we know, these are the highest accuracies ever obtained for predicting bacterial operons. Furthermore, in order to evaluate the predictive accuracy of our model when using one organism's data set for training and a different organism's data set for testing, we repeated the E. coli operon prediction analysis using a neural network trained with B. subtilis data, and a B. subtilis analysis using a neural network trained with E. coli data. Even for these cases, the accuracies reached with our method were outstandingly high, 91.5 and 93%, respectively. These results show the potential use of our method for accurately predicting the operons of any other organism. Our operon predictions for fully-sequenced genomes are available at http://operons.ibt.unam.mx/OperonPredictor/.
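The two-feature classifier can be sketched with a small neural network on synthetic gene-pair data, where each adjacent pair is described by its intergenic distance and a STRING-style functional-association score; the thresholds and distributions below are invented for illustration and do not reproduce the paper's training set.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)
n = 400
# Operon pairs tend to have short intergenic distances and high STRING scores
dist = rng.normal(np.where(rng.random(n) < 0.5, 20.0, 150.0), 30.0)
score = np.clip(rng.normal(np.where(dist < 80.0, 0.7, 0.2), 0.2), 0.0, 1.0)
same_operon = (dist < 80.0).astype(int)     # synthetic labels

X = np.column_stack([dist, score])
clf = MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000,
                    random_state=0).fit(X, same_operon)
print("P(same operon):", clf.predict_proba([[15.0, 0.9]])[0, 1].round(3))
```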
Wagner, Christian; Pan, Yuzhuo; Hsu, Vicky; Grillo, Joseph A; Zhang, Lei; Reynolds, Kellie S; Sinha, Vikram; Zhao, Ping
2015-01-01
The US Food and Drug Administration (FDA) has seen a recent increase in the application of physiologically based pharmacokinetic (PBPK) modeling towards assessing the potential of drug-drug interactions (DDI) in clinically relevant scenarios. To continue our assessment of such approaches, we evaluated the predictive performance of PBPK modeling in predicting cytochrome P450 (CYP)-mediated DDI. This evaluation was based on 15 substrate PBPK models submitted by nine sponsors between 2009 and 2013. For these 15 models, a total of 26 DDI studies (cases) with various CYP inhibitors were available. Sponsors developed the PBPK models, reportedly without considering clinical DDI data. Inhibitor models were either developed by sponsors or provided by PBPK software developers and applied with minimal or no modification. The metric for assessing predictive performance of the sponsors' PBPK approach was the R_predicted/observed value (R_predicted/observed = [predicted mean exposure ratio]/[observed mean exposure ratio], with the exposure ratio defined as [C_max (maximum plasma concentration) or AUC (area under the plasma concentration-time curve) in the presence of CYP inhibition]/[C_max or AUC in the absence of CYP inhibition]). In 81% (21/26) and 77% (20/26) of cases, respectively, the R_predicted/observed values for AUC and C_max ratios were within a pre-defined threshold of 1.25-fold of the observed data. For all cases, the R_predicted/observed values for AUC and C_max were within a 2-fold range. These results suggest that, based on the submissions to the FDA to date, there is a high degree of concordance between PBPK-predicted and observed effects of CYP inhibition, especially CYP3A-based, on the exposure of drug substrates.
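The R_predicted/observed metric is fully specified in the abstract; the following is a direct transcription into code, with illustrative input values:

```python
# Direct transcription of the R_predicted/observed metric defined above.
def exposure_ratio(with_inhibitor: float, without_inhibitor: float) -> float:
    """AUC or Cmax ratio: exposure with CYP inhibition / exposure without."""
    return with_inhibitor / without_inhibitor

def r_pred_obs(pred_with, pred_without, obs_with, obs_without) -> float:
    """Predicted mean exposure ratio divided by observed mean exposure ratio."""
    return exposure_ratio(pred_with, pred_without) / exposure_ratio(obs_with, obs_without)

# Illustrative AUC values (not from any submission):
r = r_pred_obs(pred_with=42.0, pred_without=10.0, obs_with=38.0, obs_without=10.0)
print(r, 1 / 1.25 <= r <= 1.25)   # within the pre-defined 1.25-fold threshold?
```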
Vaphiades, Michael S.; Kline, Lanning B.; McGwin, Gerald; Owsley, Cynthia; Shah, Ritu; Wood, Joanne M.
2014-01-01
Background. This study aimed to determine whether it is possible to predict driving safety of individuals with homonymous hemianopia or quadrantanopia based upon a clinical review of neuroimages that are routinely available in clinical practice. Methods. Two experienced neuroophthalmologists viewed a summary report of the CT/MRI scans of 16 participants with homonymous hemianopic or quadrantanopic field defects which indicated the site and extent of the lesion and they made predictions regarding whether participants would be safe/unsafe to drive. Driving safety was independently defined at the time of the study using state-recorded motor vehicle crashes (all crashes and at-fault) for the previous 5 years and ratings of driving safety determined through a standardized on-road driving assessment by a certified driving rehabilitation specialist. Results. The ability to predict driving safety was highly variable regardless of the driving safety measure, ranging from 31% to 63% (kappa levels ranged from −0.29 to 0.04). The level of agreement between the neuroophthalmologists was only fair (kappa = 0.28). Conclusions. Clinical evaluation of summary reports of currently available neuroimages by neuroophthalmologists is not predictive of driving safety. Future research should be directed at identifying and/or developing alternative tests or strategies to better enable clinicians to make these predictions. PMID:24683493
Staley, Dennis M.; Negri, Jacquelyn; Kean, Jason W.; Laber, Jayme L.; Tillery, Anne C.; Youberg, Ann M.
2017-01-01
Early warning of post-fire debris-flow occurrence during intense rainfall has traditionally relied upon a library of regionally specific empirical rainfall intensity–duration thresholds. Development of this library and the calculation of rainfall intensity-duration thresholds often require several years of monitoring local rainfall and hydrologic response to rainstorms, a time-consuming approach where results are often only applicable to the specific region where data were collected. Here, we present a new, fully predictive approach that utilizes rainfall, hydrologic response, and readily available geospatial data to predict rainfall intensity–duration thresholds for debris-flow generation in recently burned locations in the western United States. Unlike the traditional approach to defining regional thresholds from historical data, the proposed methodology permits the direct calculation of rainfall intensity–duration thresholds for areas where no such data exist. The thresholds calculated by this method are demonstrated to provide predictions that are of similar accuracy, and in some cases outperform, previously published regional intensity–duration thresholds. The method also provides improved predictions of debris-flow likelihood, which can be incorporated into existing approaches for post-fire debris-flow hazard assessment. Our results also provide guidance for the operational expansion of post-fire debris-flow early warning systems in areas where empirically defined regional rainfall intensity–duration thresholds do not currently exist.
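The abstract does not give the model form, but one plausible reading of a "fully predictive" threshold is a logistic occurrence model solved for the rainfall intensity at which the predicted likelihood crosses 0.5. A hedged sketch with invented predictors and simulated data:

```python
# Hypothetical sketch: fit a logistic model of debris-flow occurrence on
# rainfall intensity and a geospatial predictor, then solve for the intensity
# where P(occurrence) = 0.5. Variable names and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 400
intensity = rng.uniform(0, 60, n)        # mm/h, short-duration rainfall intensity
burn_severity = rng.uniform(0, 1, n)     # fraction of basin burned at high severity
y = (intensity * (0.5 + burn_severity) + rng.normal(0, 8, n) > 30).astype(int)

X = np.column_stack([intensity, burn_severity])
clf = LogisticRegression().fit(X, y)

# Threshold intensity where the logit is zero (p = 0.5) for a given basin:
b0 = clf.intercept_[0]
b_int, b_burn = clf.coef_[0]
severity = 0.8
threshold = -(b0 + b_burn * severity) / b_int
print(f"predicted intensity threshold: {threshold:.1f} mm/h")
```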
Machine learning landscapes and predictions for patient outcomes
NASA Astrophysics Data System (ADS)
Das, Ritankar; Wales, David J.
2017-07-01
The theory and computational tools developed to interpret and explore energy landscapes in molecular science are applied to the landscapes defined by local minima for neural networks. These machine learning landscapes correspond to fits of training data, where the inputs are vital signs and laboratory measurements for a database of patients, and the objective is to predict a clinical outcome. In this contribution, we test the predictions obtained by fitting to single measurements, and then to combinations of between 2 and 10 different patient medical data items. The effect of including measurements over different time intervals from the 48 h period in question is analysed, and the most recent values are found to be the most important. We also compare results obtained for neural networks as a function of the number of hidden nodes, and for different values of a regularization parameter. The predictions are compared with an alternative convex fitting function, and a strong correlation is observed. The dependence of these results on the patients randomly selected for training and testing decreases systematically with the size of the database available. The machine learning landscapes defined by neural network fits in this investigation have single-funnel character, which probably explains why it is relatively straightforward to obtain the global minimum solution, or a fit that behaves similarly to this optimal parameterization.
Computational approaches to define a human milk metaglycome
Agravat, Sanjay B.; Song, Xuezheng; Rojsajjakul, Teerapat; Cummings, Richard D.; Smith, David F.
2016-01-01
Motivation: The goal of deciphering the human glycome has been hindered by the lack of high-throughput sequencing methods for glycans. Although mass spectrometry (MS) is a key technology in glycan sequencing, MS alone provides limited information about the identification of monosaccharide constituents, their anomericity and their linkages. These features of individual, purified glycans can be partly identified using well-defined glycan-binding proteins, such as lectins and antibodies that recognize specific determinants within glycan structures. Results: We present a novel computational approach to automate the sequencing of glycans using metadata-assisted glycan sequencing, which combines MS analyses with glycan structural information from glycan microarray technology. Success in this approach was aided by the generation of a 'virtual glycome' to represent all potential glycan structures that might exist within a metaglycome, based on a set of biosynthetic assumptions using known structural information. We exploited this approach to deduce the structures of soluble glycans within the human milk glycome by matching predicted structures based on experimental data against the virtual glycome. This represents the first metaglycome to be defined using this method and we provide a publicly available web-based application to aid in sequencing milk glycans. Availability and implementation: http://glycomeseq.emory.edu Contact: sagravat@bidmc.harvard.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26803164
Zones of life in the subsurface of hydrothermal vents: A synthesis
NASA Astrophysics Data System (ADS)
Larson, B. I.; Houghton, J.; Meile, C. D.
2011-12-01
Subsurface microbial communities in Mid-ocean Ridge (MOR) hydrothermal systems host a wide array of unique metabolic strategies, but the spatial distribution of biogeochemical transformations is poorly constrained. Here we present an approach that reexamines chemical measurements from diffuse fluids with models of convective transport to delineate likely reaction zones. Chemical data have been compiled from bare basalt surfaces at a wide array of mid-ocean ridge systems, including 9°N, East Pacific Rise, Axial Seamount, Juan de Fuca, and Lucky Strike, Mid-Atlantic Ridge. Co-sampled end-member fluid from Ty (EPR) was used to constrain reaction path models that define diffuse fluid compositions as a function of temperature. The degree of mixing between hot vent fluid (350 deg. C) and seawater (2 deg. C) governs fluid temperature; in the models, Fe-oxide mineral precipitation is suppressed and aqueous redox reactions are prevented from equilibrating, consistent with sluggish kinetics. Quartz and pyrite are predicted to precipitate, consistent with field observations. Most reported samples of diffuse fluids from EPR and Axial Seamount fall along the same predicted mixing line only when pyrite precipitation is suppressed, but Lucky Strike fluids do not follow the same trend. The predicted fluid composition as a function of temperature is then used to calculate the free energy available to autotrophic microorganisms for a variety of catabolic strategies in the subsurface. Finally, the relationship between temperature and free energy is combined with modeled temperature fields (Lowell et al., 2007, Geochem. Geophys. Geosyst.) over a 500 m x 500 m region extending downward from the seafloor and outward from the high-temperature focused hydrothermal flow to define areas that are energetically most favorable for a given metabolic process as well as below the upper temperature limit for life (~120 deg. C). In this way, we can expand the relevance of geochemical model predictions of bioenergetics by predicting functionally defined 'Zones of Life' and placing them spatially within the boundary of the 120 deg. C isotherm, estimating the extent of the subsurface biosphere beneath mid-ocean ridge hydrothermal systems. Preliminary results indicate that methanogenesis yields the most energy per kg of vent fluid, consistent with the elevated CH4(aq) seen at all three sites, but may be constrained by temperatures too hot for microbial life, while available energy from the oxidation of Fe(II) peaks near regions of the crust that are more hospitable.
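The catabolic energy yields referenced here are conventionally computed from the Gibbs energy of reaction evaluated for the mixed fluid; a standard formulation (our notation, not quoted from the paper):

```latex
\Delta G_r \;=\; \Delta G_r^{\circ}(T,P) \;+\; RT\,\ln Q_r,
\qquad Q_r \;=\; \prod_i a_i^{\nu_i}
```

where the activity product Q_r is evaluated along the seawater-vent-fluid mixing line, so the energy available to a given metabolism becomes a function of temperature (i.e., of mixing ratio), which is what allows the energetics to be mapped onto the modeled temperature field.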
The practice of prediction: What can ecologists learn from applied, ecology-related fields?
Pennekamp, Frank; Adamson, Matthew; Petchey, Owen L; Poggiale, Jean-Christophe; Aguiar, Maira; Kooi, Bob W.; Botkin, Daniel B.; DeAngelis, Donald L.
2017-01-01
The pervasive influence of human-induced global environmental change affects biodiversity across the globe, and there is great uncertainty as to how the biosphere will react on short and longer time scales. To adapt to what the future holds and to manage the impacts of global change, scientists need to predict the expected effects with some confidence and communicate these predictions to policy makers. However, recent reviews found that we currently lack a clear understanding of how predictable ecology is, with views ranging from mostly unpredictable to potentially predictable, at least over short time frames. In applied, ecology-related fields, by contrast, predictions are more commonly formulated and reported, as well as evaluated in hindsight, potentially allowing one to define baselines of predictive proficiency in these fields. We searched the literature for representative case studies in these fields and collected information about modeling approaches, target variables of prediction, predictive proficiency achieved, as well as the availability of data to parameterize predictive models. We find that some fields, such as epidemiology, achieve high predictive proficiency, but even in the more predictive fields proficiency is evaluated in different ways. Both phenomenological and mechanistic approaches are used in most fields, but differences are often small, with no clear superiority of one approach over the other. Data availability is limiting in most fields, with long-term studies being rare and detailed data for parameterizing mechanistic models being in short supply. We suggest that ecologists adopt a more rigorous approach to report and assess predictive proficiency, and embrace the challenges of real-world decision making to strengthen the practice of prediction in ecology.
Gupta, Nidhi; Heiden, Marina; Mathiassen, Svend Erik; Holtermann, Andreas
2016-05-01
We aimed at developing and evaluating statistical models predicting objectively measured occupational time spent sedentary or in physical activity from self-reported information available in large epidemiological studies and surveys. Two hundred and fourteen blue-collar workers responded to a questionnaire containing information about personal and work-related variables, available in most large epidemiological studies and surveys. Workers also wore accelerometers for 1-4 days, measuring time spent sedentary and in physical activity, defined as non-sedentary time. Least-squares linear regression models were developed, predicting objectively measured exposures from selected predictors in the questionnaire. A full prediction model based on age, gender, body mass index, job group, self-reported occupational physical activity (OPA), and self-reported occupational sedentary time (OST) explained 63% (adjusted R²) of the variance of both objectively measured time spent sedentary and in physical activity, since these two exposures were complementary. Single-predictor models based only on self-reported information about either OPA or OST explained 21% and 38%, respectively, of the variance of the objectively measured exposures. Internal validation using bootstrapping suggested that the full and single-predictor models would show almost the same performance in new datasets as in that used for modelling. Both full and single-predictor models based on self-reported information typically available in most large epidemiological studies and surveys were able to predict objectively measured occupational time spent sedentary or in physical activity, with explained variances ranging from 21% to 63%.
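A minimal sketch of the single-predictor case (objectively measured sedentary time regressed on self-reported OST); the data are simulated and the coefficients are not the published ones:

```python
# Hypothetical single-predictor least-squares model, as described above.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 214
ost_selfreport = rng.uniform(0, 100, n)                      # % of workday, questionnaire
measured = 0.6 * ost_selfreport + 20 + rng.normal(0, 12, n)  # % of workday, accelerometer

X = ost_selfreport.reshape(-1, 1)
model = LinearRegression().fit(X, measured)
r2 = model.score(X, measured)
print(f"explained variance (R^2): {r2:.2f}")   # cf. 38% for the published OST-only model
```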
Examination of Solar Cycle Statistical Model and New Prediction of Solar Cycle 23
NASA Technical Reports Server (NTRS)
Kim, Myung-Hee Y.; Wilson, John W.
2000-01-01
Sunspot numbers in the current solar cycle 23 were estimated by using a statistical model with the accumulating cycle sunspot data, based on the odd-even behavior of historical sunspot cycles 1 to 22. Since cycle 23 has progressed and the solar minimum occurrence has been accurately defined, the statistical model is validated by comparing the previous prediction with the newly measured sunspot numbers; an improved short-range sunspot projection is made accordingly. The current cycle is expected to have a moderate level of activity. Errors of this model are shown to be self-correcting as cycle observations become available.
Baseline predictability of daily east Asian summer monsoon circulation indices
NASA Astrophysics Data System (ADS)
Ai, Shucong; Chen, Quanliang; Li, Jianping; Ding, Ruiqiang; Zhong, Quanjia
2017-05-01
The nonlinear local Lyapunov exponent (NLLE) method is adopted to quantitatively determine the predictability limit of East Asian summer monsoon (EASM) intensity indices on a synoptic timescale. The predictability limit of EASM indices varies widely according to the definitions of the indices. EASM indices defined by zonal wind shear have a limit of around 7 days, which is higher than the predictability limit of EASM indices defined by sea level pressure (SLP) difference and meridional wind shear (about 5 days). The initial error of EASM indices defined by SLP difference and meridional wind shear shows faster growth than that of indices defined by zonal wind shear. Furthermore, the indices defined by zonal wind shear appear to fluctuate at lower frequencies, whereas the indices defined by SLP difference and meridional wind shear generally fluctuate at higher frequencies. This result may explain why the daily variability of the EASM indices defined by zonal wind shear tends to be more predictable than that of indices defined by SLP difference and meridional wind shear. Analysis of the temporal correlation coefficient (TCC) skill for EASM indices obtained from observations and from NCEP's Global Ensemble Forecasting System (GEFS) historical weather forecast dataset shows that GEFS has a higher forecast skill for the EASM indices defined by zonal wind shear than for indices defined by SLP difference and meridional wind shear. The predictability limit estimated by the NLLE method is shorter than that in GEFS. In addition, the June-September average TCC skill for different daily EASM indices shows significant interannual variations from 1985 to 2015 in GEFS. However, the TCC for different types of EASM indices does not show coherent interannual fluctuations.
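The NLLE-style predictability limit can be illustrated on a toy system: track the mean logarithmic divergence of initially close trajectory pairs and read off the time at which the mean error approaches saturation. The sketch below uses a logistic map, not real EASM indices:

```python
# Hedged sketch of the error-growth idea behind the NLLE method: the
# predictability limit is taken as the time at which the mean error between
# initially close trajectories reaches ~95% of its saturation value.
import numpy as np

def trajectory(x0, steps, r=3.9):
    """Iterate a chaotic logistic map as a stand-in time series."""
    x = np.empty(steps)
    x[0] = x0
    for t in range(steps - 1):
        x[t + 1] = r * x[t] * (1 - x[t])
    return x

rng = np.random.default_rng(3)
steps, pairs, eps = 60, 2000, 1e-6
errors = np.empty((pairs, steps))
for i in range(pairs):
    x0 = rng.uniform(0.1, 0.9)
    errors[i] = np.abs(trajectory(x0, steps) - trajectory(x0 + eps, steps))

mean_err = errors.mean(axis=0)
saturation = mean_err[-10:].mean()
limit = int(np.argmax(mean_err >= 0.95 * saturation))
print(f"predictability limit: ~{limit} steps")
```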
Standardization of a Hierarchical Transactive Control System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hammerstrom, Donald J.; Oliver, Terry V.; Melton, Ronald B.
2010-12-03
The authors describe work they have conducted toward the generalization and standardization of the transactive control approach that was first demonstrated in the Olympic Peninsula Project for the management of a transmission constraint. The newly generalized approach addresses several potential shortfalls of the prior approach: First, the authors have formalized a hierarchical node structure which defines the nodes and the functional signal pathways between these nodes. Second, by fully generalizing the inputs, outputs, and functional responsibilities of each node, the authors make the approach available to a much wider set of responsive assets and operational objectives. Third, the new, generalized approach defines transactive signals that include the predicted day-ahead future. This predictive feature allows the market-like bids and offers to become resolved iteratively over time, thus allowing the behaviors of responsive assets to be called upon both for the present and as future dispatch decisions are being made. The hierarchical transactive control approach is a key feature of a proposed Pacific Northwest smart grid demonstration.
A cis-regulatory logic simulator.
Zeigler, Robert D; Gertz, Jason; Cohen, Barak A
2007-07-27
A major goal of computational studies of gene regulation is to accurately predict the expression of genes based on the cis-regulatory content of their promoters. The development of computational methods to decode the interactions among cis-regulatory elements has been slow, in part, because it is difficult to know, without extensive experimental validation, whether a particular method identifies the correct cis-regulatory interactions that underlie a given set of expression data. There is an urgent need for test expression data in which the interactions among cis-regulatory sites that produce the data are known. The ability to rapidly generate such data sets would facilitate the development and comparison of computational methods that predict gene expression patterns from promoter sequence. We developed a gene expression simulator which generates expression data using user-defined interactions between cis-regulatory sites. The simulator can incorporate additive, cooperative, competitive, and synergistic interactions between regulatory elements. Constraints on the spacing, distance, and orientation of regulatory elements and their interactions may also be defined and Gaussian noise can be added to the expression values. The simulator allows for a data transformation that simulates the sigmoid shape of expression levels from real promoters. We found good agreement between sets of simulated promoters and predicted regulatory modules from real expression data. We present several data sets that may be useful for testing new methodologies for predicting gene expression from promoter sequence. We developed a flexible gene expression simulator that rapidly generates large numbers of simulated promoters and their corresponding transcriptional output based on specified interactions between cis-regulatory sites. When appropriate rule sets are used, the data generated by our simulator faithfully reproduces experimentally derived data sets. We anticipate that using simulated gene expression data sets will facilitate the direct comparison of computational strategies to predict gene expression from promoter sequence. The source code is available online and as additional material. The test sets are available as additional material.
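A toy version of the simulator's core loop, assuming one cooperative interaction rule, additive site weights, Gaussian noise, and a sigmoid transform (all values invented, not the published rule sets):

```python
# Hypothetical miniature of the simulator concept described above.
import numpy as np

rng = np.random.default_rng(4)
SITE_WEIGHTS = {"A": 1.0, "B": 0.8, "C": -0.5}   # two activators, one repressor
COOPERATIVE_BONUS = {("A", "B"): 1.5}            # synergy when both sites are present

def simulate_expression(promoter_sites, noise_sd=0.2):
    # Additive contributions from each cis-regulatory site
    level = sum(SITE_WEIGHTS[s] for s in promoter_sites)
    # Cooperative interactions between defined site pairs
    for (s1, s2), bonus in COOPERATIVE_BONUS.items():
        if s1 in promoter_sites and s2 in promoter_sites:
            level += bonus
    # Gaussian noise, then a sigmoid transform mimicking real promoter output
    level += rng.normal(0, noise_sd)
    return 1 / (1 + np.exp(-level))

for promoter in [("A",), ("A", "B"), ("A", "B", "C")]:
    print(promoter, round(simulate_expression(promoter), 3))
```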
On the Gause predator-prey model with a refuge: a fresh look at the history.
Křivan, Vlastimil
2011-04-07
This article re-analyses a prey-predator model with a refuge introduced by one of the founders of population ecology, Gause, and his co-workers to explain discrepancies between their observations and predictions of the Lotka-Volterra prey-predator model. They replaced the linear functional response used by Lotka and Volterra by a saturating functional response with a discontinuity at a critical prey density. At densities below this critical density prey were effectively in a refuge, while at higher densities they were available to predators. Thus, their functional response was of the Holling type III. They analyzed this model and predicted the existence of a limit cycle in predator-prey dynamics. In this article I show that their model is ill-posed, because trajectories are not well defined. Using the Filippov method, I define and analyze solutions of the Gause model. I show that depending on parameter values, there are three possibilities: (1) trajectories converge to a limit cycle, as predicted by Gause, (2) trajectories converge to an equilibrium, or (3) the prey population escapes predator control and grows to infinity. Copyright © 2011 Elsevier Ltd. All rights reserved.
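A hedged sketch of a Gause-type system with a refuge: the functional response is zero below the critical prey density and saturating above it. Parameter values are illustrative, and the integrator simply steps across the discontinuity rather than applying the Filippov construction analyzed in the paper:

```python
# Illustrative Gause-type predator-prey model with a prey refuge.
import numpy as np
from scipy.integrate import solve_ivp

r, K = 1.0, 10.0        # prey intrinsic growth rate and carrying capacity
a, h = 1.0, 0.5         # attack rate and handling time
e, m = 0.5, 0.3         # conversion efficiency and predator mortality
x_c = 2.0               # critical prey density (refuge boundary)

def response(x):
    """Zero below the refuge boundary, saturating (Holling-type) above it."""
    x_avail = max(x - x_c, 0.0)
    return a * x_avail / (1 + a * h * x_avail)

def gause(t, z):
    x, y = z
    f = response(x)
    return [r * x * (1 - x / K) - f * y,   # prey
            e * f * y - m * y]             # predator

sol = solve_ivp(gause, (0, 200), [5.0, 1.0], max_step=0.1)
print("final prey/predator densities:", sol.y[0, -1], sol.y[1, -1])
```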
Tools4miRs – one place to gather all the tools for miRNA analysis
Lukasik, Anna; Wójcikowski, Maciej; Zielenkiewicz, Piotr
2016-01-01
Summary: MiRNAs are short, non-coding molecules that negatively regulate gene expression and thereby play several important roles in living organisms. Dozens of computational methods for miRNA-related research have been developed, which greatly differ in various aspects. The substantial availability of difficult-to-compare approaches makes it challenging for the user to select a proper tool and prompts the need for a solution that will collect and categorize all the methods. Here, we present tools4miRs, the first platform that currently gathers more than 160 methods for broadly defined miRNA analysis. The collected tools are classified into several general and more detailed categories in which the users can additionally filter the available methods according to their specific research needs, capabilities and preferences. Tools4miRs is also a web-based target prediction meta-server that incorporates user-designated target prediction methods into the analysis of user-provided data. Availability and Implementation: Tools4miRs is implemented in Python using Django and is freely available at tools4mirs.org. Contact: piotr@ibb.waw.pl Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153626
García-Jiménez, Beatriz; Pons, Tirso; Sanchis, Araceli; Valencia, Alfonso
2014-01-01
Biological pathways are important elements of systems biology and in the past decade, an increasing number of pathway databases have been set up to document the growing understanding of complex cellular processes. Although more genome-sequence data are becoming available, a large fraction of it remains functionally uncharacterized. Thus, it is important to be able to predict the mapping of poorly annotated proteins to original pathway models. We have developed a Relational Learning-based Extension (RLE) system to investigate pathway membership through a function prediction approach that mainly relies on combinations of simple properties attributed to each protein. RLE searches for proteins with molecular similarities to specific pathway components. Using RLE, we associated 383 uncharacterized proteins with 28 pre-defined human Reactome pathways, demonstrating reasonable confidence after proper evaluation. Indeed, in specific cases manual inspection of the database annotations and the related literature supported the proposed classifications. Examples of possible additional components of the Electron transport system, Telomere maintenance and Integrin cell surface interactions pathways are discussed in detail. All the predicted human proteins for Reactome releases 30 (2009) and 40 (2012) are available at http://rle.bioinfo.cnio.es.
Non-animal methods to predict skin sensitization (II): an assessment of defined approaches *.
Kleinstreuer, Nicole C; Hoffmann, Sebastian; Alépée, Nathalie; Allen, David; Ashikaga, Takao; Casey, Warren; Clouet, Elodie; Cluzel, Magalie; Desprez, Bertrand; Gellatly, Nichola; Göbel, Carsten; Kern, Petra S; Klaric, Martina; Kühnl, Jochen; Martinozzi-Teissier, Silvia; Mewes, Karsten; Miyazawa, Masaaki; Strickland, Judy; van Vliet, Erwin; Zang, Qingda; Petersohn, Dirk
2018-05-01
Skin sensitization is a toxicity endpoint of widespread concern, for which the mechanistic understanding and concurrent necessity for non-animal testing approaches have evolved to a critical juncture, with many available options for predicting sensitization without using animals. Cosmetics Europe and the National Toxicology Program Interagency Center for the Evaluation of Alternative Toxicological Methods collaborated to analyze the performance of multiple non-animal data integration approaches for the skin sensitization safety assessment of cosmetics ingredients. The Cosmetics Europe Skin Tolerance Task Force (STTF) collected and generated data on 128 substances in multiple in vitro and in chemico skin sensitization assays selected based on a systematic assessment by the STTF. These assays, together with certain in silico predictions, are key components of various non-animal testing strategies that have been submitted to the Organization for Economic Cooperation and Development as case studies for skin sensitization. Curated murine local lymph node assay (LLNA) and human skin sensitization data were used to evaluate the performance of six defined approaches, comprising eight non-animal testing strategies, for both hazard and potency characterization. Defined approaches examined included consensus methods, artificial neural networks, support vector machine models, Bayesian networks, and decision trees, most of which were reproduced using open source software tools. Multiple non-animal testing strategies incorporating in vitro, in chemico, and in silico inputs demonstrated equivalent or superior performance to the LLNA when compared to both animal and human data for skin sensitization.
Dang, Mia; Ramsaran, Kalinda D; Street, Melissa E; Syed, S Noreen; Barclay-Goddard, Ruth; Stratford, Paul W; Miller, Patricia A
2011-01-01
To estimate the predictive accuracy and clinical usefulness of the Chedoke-McMaster Stroke Assessment (CMSA) predictive equations. A longitudinal prognostic study using historical data obtained from 104 patients admitted post cerebrovascular accident was undertaken. Data were abstracted for all patients undergoing rehabilitation post stroke who also had documented admission and discharge CMSA scores. Published predictive equations were used to determine predicted outcomes. To determine the accuracy and clinical usefulness of the predictive model, shrinkage coefficients and predictions with 95% confidence bands were calculated. Complete data were available for 74 patients with a mean age of 65.3±12.4 years. The shrinkage values for the six Impairment Inventory (II) dimensions varied from -0.05 to 0.09; the shrinkage value for the Activity Inventory (AI) was 0.21. The error associated with predictive values was greater than ±1.5 stages for the II dimensions and greater than ±24 points for the AI. This study shows that the large error associated with the predictions (as defined by the confidence band) for the CMSA II and AI limits their clinical usefulness as a predictive measure. Further research to establish predictive models using alternative statistical procedures is warranted.
Applicability of empirical data currently used in predicting solid propellant exhaust plumes
NASA Technical Reports Server (NTRS)
Tevepaugh, J. A.; Smith, S. D.; Penny, M. M.; Greenwood, T.; Roberts, B. B.
1977-01-01
Theoretical and experimental approaches to exhaust plume analysis are compared. A two-phase model is extended to include treatment of reacting gas chemistry, and thermodynamic modeling of the gaseous phase of the flow field is considered. The applicability of empirical data currently available to define particle drag coefficients, heat transfer coefficients, mean particle size, and particle size distributions is investigated. Experimental and analytical comparisons are presented for subscale solid rocket motors operating at three altitudes, with attention to pitot total pressure and stagnation point heating rate measurements. Input requirements for the mathematical treatment are explained. The two-phase flow field solution adequately predicts gasdynamic properties in the inviscid portion of two-phase exhaust plumes. It is found that prediction of exhaust plume gas pressures requires an adequate model of flow field dynamics.
Figeys, Daniel; Fai, Stephen; Bennett, Steffany A. L.
2013-01-01
Motivation: Establishing phospholipid identities in large lipidomic datasets is a labour-intensive process. Where genomics and proteomics capitalize on sequence-based signatures, glycerophospholipids lack easily definable molecular fingerprints. Carbon chain length, degree of unsaturation, linkage, and polar head group identity must be calculated from mass-to-charge (m/z) ratios under defined mass spectrometry (MS) conditions. Given increasing MS sensitivity, many m/z values are not represented in existing prediction engines. To address this need, Visualization and Phospholipid Identification (VaLID) is a web-based application that returns all theoretically possible phospholipids for any m/z value and MS condition. Visualization algorithms produce multiple chemical structure files for each species. Curated lipids detected by the Canadian Institutes of Health Research Training Program in Neurodegenerative Lipidomics are provided as high-resolution structures. Availability: VaLID is available through the Canadian Institutes of Health Research Training Program in Neurodegenerative Lipidomics resources web site at https://www.med.uottawa.ca/lipidomics/resources.html. Contacts: lipawrd@uottawa.ca Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:23162086
G-cimp status prediction of glioblastoma samples using mRNA expression data.
Baysan, Mehmet; Bozdag, Serdar; Cam, Margaret C; Kotliarova, Svetlana; Ahn, Susie; Walling, Jennifer; Killian, Jonathan K; Stevenson, Holly; Meltzer, Paul; Fine, Howard A
2012-01-01
Glioblastoma Multiforme (GBM) is a tumor with high mortality and no known cure. The dramatic molecular and clinical heterogeneity seen in this tumor has led to attempts to define genetically similar subgroups of GBM with the hope of developing tumor specific therapies targeted to the unique biology within each of these subgroups. Recently, a subset of relatively favorable prognosis GBMs has been identified. These glioma CpG island methylator phenotype, or G-CIMP tumors, have distinct genomic copy number aberrations, DNA methylation patterns, and (mRNA) expression profiles compared to other GBMs. While the standard method for identifying G-CIMP tumors is based on genome-wide DNA methylation data, such data is often not available compared to the more widely available gene expression data. In this study, we have developed and evaluated a method to predict the G-CIMP status of GBM samples based solely on gene expression data.
Determination of Fracture Parameters for Multiple Cracks of Laminated Composite Finite Plate
NASA Astrophysics Data System (ADS)
Srivastava, Amit Kumar; Arora, P. K.; Srivastava, Sharad Chandra; Kumar, Harish; Lohumi, M. K.
2018-04-01
A predictive method for estimation of the stress state at the crack tip zone and assessment of remaining component lifetime depends on the stress intensity factor (SIF). This paper discusses a numerical approach for prediction of the first ply failure load (FL), progressive failure load, SIF and critical SIF for multiple crack configurations of a laminated composite finite plate using the finite element method (FEM). The Hashin and Chang failure criteria are incorporated in ABAQUS using the user defined field variables (USDFLD) subroutine approach for prediction of the progressive fracture response of the laminated composite finite plate, which is not directly available in the software. A tensile experiment on a laminated composite finite plate with stress concentration is performed to validate the numerically predicted subroutine results, showing excellent agreement. Typical results are presented to examine the effect of changing the crack tip distance (S), crack offset distance (H), and stacking fiber angle (θ) on FL and SIF.
Sun, Jiangming; Carlsson, Lars; Ahlberg, Ernst; Norinder, Ulf; Engkvist, Ola; Chen, Hongming
2017-07-24
Conformal prediction has been proposed as a more rigorous way to define prediction confidence compared to other applicability domain concepts that have earlier been used for QSAR modeling. One main advantage of such a method is that it provides a prediction region potentially with multiple predicted labels, in contrast to the single-valued (regression) or single-label (classification) output predictions of standard QSAR modeling algorithms. Standard conformal prediction might not be suitable for imbalanced data sets. Therefore, Mondrian cross-conformal prediction (MCCP), which combines Mondrian inductive conformal prediction with cross-fold calibration sets, has been introduced. In this study, the MCCP method was applied to 18 publicly available data sets that have various imbalance levels varying from 1:10 to 1:1000 (ratio of active/inactive compounds). Our results show that MCCP in general performed well on bioactivity data sets with various imbalance levels. More importantly, the method not only provides confidence of prediction and prediction regions compared to standard machine learning methods but also produces valid predictions for the minority class. In addition, a compound-similarity-based nonconformity measure was investigated. Our results demonstrate that although it gives valid predictions, its efficiency is much worse than that of model-dependent metrics.
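A minimal sketch of Mondrian (class-conditional) inductive conformal classification on an imbalanced set, with calibration scores pooled per class so that validity holds for the minority class too; the data, model, and nonconformity measure are illustrative, not the authors' exact setup:

```python
# Hedged sketch of Mondrian inductive conformal prediction for binary labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, weights=[0.95], random_state=0)
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_cal, X_te, y_cal, y_te = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

def nonconformity(probs, labels):
    """Low predicted probability of the true label -> high nonconformity."""
    return 1.0 - probs[np.arange(len(labels)), labels]

cal_scores = nonconformity(clf.predict_proba(X_cal), y_cal)

def prediction_set(x, alpha=0.2):
    probs = clf.predict_proba(x.reshape(1, -1))[0]
    region = []
    for label in (0, 1):
        ref = cal_scores[y_cal == label]   # Mondrian: calibrate per class
        score = 1.0 - probs[label]
        p_value = (np.sum(ref >= score) + 1) / (len(ref) + 1)
        if p_value > alpha:
            region.append(label)           # label kept in the prediction region
    return region

print(prediction_set(X_te[0]), "true:", y_te[0])
```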
Stochastic Plume Simulations for the Fukushima Accident and the Deep Water Horizon Oil Spill
NASA Astrophysics Data System (ADS)
Coelho, E.; Peggion, G.; Rowley, C.; Hogan, P.
2012-04-01
The Fukushima Dai-ichi power plant suffered damage leading to radioactive contamination of coastal waters. Major issues in characterizing the extent of the affected waters were a poor knowledge of the radiation released to the coastal waters and the rather complex coastal dynamics of the region, not deterministically captured by the available prediction systems. Equivalently, during the Gulf of Mexico Deep Water Horizon oil platform accident in April 2010, significant amounts of oil and gas were released from the ocean floor. For this case, issues in mapping and predicting the extent of the affected waters in real time were a poor knowledge of the actual amounts of oil reaching the surface and the fact that coastal dynamics over the region were not deterministically captured by the available prediction systems. To assess the ocean regions and times that were most likely affected by these accidents while capturing the above sources of uncertainty, ensembles of the Navy Coastal Ocean Model (NCOM) were configured over the two regions (NE Japan and the northern Gulf of Mexico). For the Fukushima case, tracers were released on each ensemble member; their locations at each instant provided reference positions of water volumes where the signature of water released from the plant could be found. For the Deep Water Horizon oil spill case, each ensemble member was coupled with a diffusion-advection solution to estimate possible scenarios of oil concentrations, using perturbed estimates of the released amounts as the source terms at the surface. Stochastic plumes were then defined using a Risk Assessment Code (RAC) analysis that assigns each grid point a number from 1 to 5, determined by the likelihood of finding tracer particles within short range (for the Fukushima case), hence defining the high-risk areas and those recommended for monitoring. For the oil spill case, the RAC codes were determined by the likelihood of reaching oil concentrations as defined in the Bonn Agreement Oil Appearance Code. The likelihoods were taken in both cases from probability distribution functions derived from the ensemble runs. Results were compared with a control deterministic solution and checked against available reports to assess their skill in capturing the actual observed plumes and other in-situ data, as well as their relevance for planning surveys and reconnaissance flights for both cases.
Identification of human chromosome 22 transcribed sequences with ORF expressed sequence tags
de Souza, Sandro J.; Camargo, Anamaria A.; Briones, Marcelo R. S.; Costa, Fernando F.; Nagai, Maria Aparecida; Verjovski-Almeida, Sergio; Zago, Marco A.; Andrade, Luis Eduardo C.; Carrer, Helaine; El-Dorry, Hamza F. A.; Espreafico, Enilza M.; Habr-Gama, Angelita; Giannella-Neto, Daniel; Goldman, Gustavo H.; Gruber, Arthur; Hackel, Christine; Kimura, Edna T.; Maciel, Rui M. B.; Marie, Suely K. N.; Martins, Elizabeth A. L.; Nóbrega, Marina P.; Paçó-Larson, Maria Luisa; Pardini, Maria Inês M. C.; Pereira, Gonçalo G.; Pesquero, João Bosco; Rodrigues, Vanderlei; Rogatto, Silvia R.; da Silva, Ismael D. C. G.; Sogayar, Mari C.; de Fátima Sonati, Maria; Tajara, Eloiza H.; Valentini, Sandro R.; Acencio, Marcio; Alberto, Fernando L.; Amaral, Maria Elisabete J.; Aneas, Ivy; Bengtson, Mário Henrique; Carraro, Dirce M.; Carvalho, Alex F.; Carvalho, Lúcia Helena; Cerutti, Janete M.; Corrêa, Maria Lucia C.; Costa, Maria Cristina R.; Curcio, Cyntia; Gushiken, Tsieko; Ho, Paulo L.; Kimura, Elza; Leite, Luciana C. C.; Maia, Gustavo; Majumder, Paromita; Marins, Mozart; Matsukuma, Adriana; Melo, Analy S. A.; Mestriner, Carlos Alberto; Miracca, Elisabete C.; Miranda, Daniela C.; Nascimento, Ana Lucia T. O.; Nóbrega, Francisco G.; Ojopi, Élida P. B.; Pandolfi, José Rodrigo C.; Pessoa, Luciana Gilbert; Rahal, Paula; Rainho, Claudia A.; da Ro's, Nancy; de Sá, Renata G.; Sales, Magaly M.; da Silva, Neusa P.; Silva, Tereza C.; da Silva, Wilson; Simão, Daniel F.; Sousa, Josane F.; Stecconi, Daniella; Tsukumo, Fernando; Valente, Valéria; Zalcberg, Heloisa; Brentani, Ricardo R.; Reis, Luis F. L.; Dias-Neto, Emmanuel; Simpson, Andrew J. G.
2000-01-01
Transcribed sequences in the human genome can be identified with confidence only by alignment with sequences derived from cDNAs synthesized from naturally occurring mRNAs. We constructed a set of 250,000 cDNAs that represent partial expressed gene sequences and that are biased toward the central coding regions of the resulting transcripts. They are termed ORF expressed sequence tags (ORESTES). The 250,000 ORESTES were assembled into 81,429 contigs. Of these, 1,181 (1.45%) were found to match sequences in chromosome 22, with at least one ORESTES contig for 162 (65.6%) of the 247 known genes, for 67 (44.6%) of the 150 related genes, and for 45 (30.4%) of the 148 EST-predicted genes on this chromosome. Using a set of stringent criteria to validate our sequences, we identified a further 219 previously unannotated transcribed sequences on chromosome 22. Of these, 171 were in fact also defined by EST or full-length cDNA sequences available in GenBank but not utilized in the initial annotation of the first human chromosome sequence. Thus, despite representing less than 15% of all expressed human sequences in the public databases at the time of the present analysis, ORESTES sequences defined 48 transcribed sequences on chromosome 22 not defined by other sequences. All of the transcribed sequences defined by ORESTES coincided with DNA regions predicted as encoding exons by GENSCAN (http://genes.mit.edu/GENSCAN.html). PMID:11070084
NASA Astrophysics Data System (ADS)
Graham, Wendy D.; Neff, Christina R.
1994-05-01
The first-order analytical solution of the inverse problem for estimating spatially variable recharge and transmissivity under steady-state groundwater flow, developed in Part 1, is applied to the Upper Floridan Aquifer in NE Florida. Parameters characterizing the statistical structure of the log-transmissivity and head fields are estimated from 152 measurements of transmissivity and 146 measurements of hydraulic head available in the study region. Optimal estimates of the recharge, transmissivity and head fields are produced throughout the study region by conditioning on the nearest 10 available transmissivity measurements and the nearest 10 available head measurements. Head observations are shown to provide valuable information for estimating both the transmissivity and the recharge fields. Accurate numerical groundwater model predictions of the aquifer flow system are obtained using the optimal transmissivity and recharge fields as input parameters, and the optimal head field to define boundary conditions. For this case study, both the transmissivity field and the uncertainty of the transmissivity field prediction are poorly estimated when the effects of random recharge are neglected.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alves, Vinicius M.; Muratov, Eugene
Repetitive exposure to a chemical agent can induce an immune reaction in inherently susceptible individuals that leads to skin sensitization. Although many chemicals have been reported as skin sensitizers, there have been very few rigorously validated QSAR models with defined applicability domains (AD) that were developed using a large group of chemically diverse compounds. In this study, we have aimed to compile, curate, and integrate the largest publicly available dataset related to chemically-induced skin sensitization, use this data to generate rigorously validated QSAR models for skin sensitization, and employ these models as a virtual screening tool for identifying putative sensitizers among environmental chemicals. We followed best practices for model building and validation implemented with our predictive QSAR workflow using the Random Forest modeling technique in combination with SiRMS and Dragon descriptors. The Correct Classification Rate (CCR) for QSAR models discriminating sensitizers from non-sensitizers was 71-88% when evaluated on several external validation sets, within a broad AD, with positive (for sensitizers) and negative (for non-sensitizers) predicted rates of 85% and 79%, respectively. When compared to the skin sensitization module included in the OECD QSAR Toolbox as well as to the skin sensitization model in the publicly available VEGA software, our models showed a significantly higher prediction accuracy for the same sets of external compounds, as evaluated by Positive Predicted Rate, Negative Predicted Rate, and CCR. These models were applied to identify putative chemical hazards in the Scorecard database of possible skin or sense organ toxicants as primary candidates for experimental validation. Highlights: • The largest publicly available skin sensitization dataset was compiled. • Predictive QSAR models were developed for skin sensitization. • The developed models have higher prediction accuracy than the OECD QSAR Toolbox. • Putative chemical hazards in the Scorecard database were identified using our models.
On the Conditioning of Machine-Learning-Assisted Turbulence Modeling
NASA Astrophysics Data System (ADS)
Wu, Jinlong; Sun, Rui; Wang, Qiqi; Xiao, Heng
2017-11-01
Recently, several researchers have demonstrated that machine learning techniques can be used to improve the RANS-modeled Reynolds stress by training on available databases of high-fidelity simulations. However, obtaining an improved mean velocity field remains an unsolved challenge, restricting the predictive capability of current machine-learning-assisted turbulence modeling approaches. In this work we define a condition number to evaluate the model conditioning of data-driven turbulence modeling approaches, and propose a stability-oriented machine learning framework to model Reynolds stress. Two canonical flows, the flow in a square duct and the flow over periodic hills, are investigated to demonstrate the predictive capability of the proposed framework. The satisfactory prediction performance of the mean velocity field for both flows demonstrates the predictive capability of the proposed framework for machine-learning-assisted turbulence modeling. By demonstrating this capability to improve the prediction of the mean flow field, the proposed stability-oriented machine learning framework bridges the gap between existing machine-learning-assisted turbulence modeling approaches and the demand for predictive turbulence models in real applications.
SinEx DB: a database for single exon coding sequences in mammalian genomes.
Jorquera, Roddy; Ortiz, Rodrigo; Ossandon, F; Cárdenas, Juan Pablo; Sepúlveda, Rene; González, Carolina; Holmes, David S
2016-01-01
Eukaryotic genes are typically interrupted by intragenic, noncoding sequences termed introns. However, some genes lack introns in their coding sequence (CDS) and are generally known as 'single exon genes' (SEGs). In this work, a SEG is defined as a nuclear, protein-coding gene that lacks introns in its CDS. Whereas many public databases of eukaryotic multi-exon genes are available, there are only two specialized databases for SEGs. The present work addresses the need for a more extensive and diverse database by creating SinEx DB, a publicly available, searchable database of predicted SEGs from 10 completely sequenced mammalian genomes, including human. SinEx DB houses the DNA and protein sequence information of these SEGs and includes their functional predictions (KOG) and the relative distribution of these functions within species. The information is stored in a relational database built with MySQL Server 5.1.33, and the complete dataset of SEG sequences and their functional predictions is available for download. SinEx DB can be interrogated by: (i) browsing a phylogenetic schema, (ii) carrying out BLAST searches of the in-house SinEx DB of SEGs and (iii) using an advanced search mode in which the database can be searched by key words and any combination of searches by species and predicted functions. SinEx DB provides a rich source of information for advancing our understanding of the evolution and function of SEGs. Database URL: www.sinex.cl. © The Author(s) 2016. Published by Oxford University Press.
Hidden markov model for the prediction of transmembrane proteins using MATLAB.
Chaturvedi, Navaneet; Shanker, Sudhanshu; Singh, Vinay Kumar; Sinha, Dhiraj; Pandey, Paras Nath
2011-01-01
Since membrane proteins play a key role in drug targeting, transmembrane protein prediction is an active and challenging area of biological sciences. Location-based prediction of transmembrane proteins is significant for the functional annotation of protein sequences. Hidden Markov model-based methods have been widely applied for transmembrane topology prediction. Here we present a revised and more readily understood model for transmembrane protein prediction than an existing one. Scripts were built and compiled in MATLAB for parameter estimation, and the model was applied to amino acid sequences to identify transmembrane segments and their adjacent locations. The estimated model of transmembrane topology was based on the TMHMM model architecture. Only 7 super-states are defined in the given dataset, which were converted to 96 states on the basis of their length in the sequence. The prediction accuracy of the model was observed to be about 74%, which is good enough in the area of transmembrane topology prediction. We therefore conclude that the hidden Markov model plays a crucial role in transmembrane helix prediction on the MATLAB platform and could also be useful for drug discovery strategies. The database is freely available via bioinfonavneet@gmail.com or vinaysingh@bhu.ac.in.
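The paper's implementation is in MATLAB; the sketch below shows the same core Viterbi decoding step in Python on a toy two-state topology (membrane vs. loop), with invented transition and emission probabilities:

```python
# Hedged illustration of HMM decoding for transmembrane topology: a minimal
# Viterbi decoder over two states, using hydrophobic (H) vs. polar (P)
# residue classes as emissions. All probabilities are invented.
import numpy as np

states = ["membrane", "loop"]
log_trans = np.log(np.array([[0.9, 0.1],    # membrane -> membrane/loop
                             [0.1, 0.9]]))  # loop -> membrane/loop
log_emit = {"H": np.log([0.8, 0.3]),        # P(H | membrane), P(H | loop)
            "P": np.log([0.2, 0.7])}

def viterbi(seq, log_start=np.log([0.5, 0.5])):
    v = log_start + log_emit[seq[0]]
    back = []
    for sym in seq[1:]:
        step = v[:, None] + log_trans        # score of every state transition
        back.append(step.argmax(axis=0))     # best predecessor per state
        v = step.max(axis=0) + log_emit[sym]
    path = [int(v.argmax())]                 # backtrack from the best final state
    for ptr in reversed(back):
        path.append(int(ptr[path[-1]]))
    return [states[s] for s in reversed(path)]

print(viterbi("HHHHPPPHHH"))
```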
BIG DATA ANALYTICS AND PRECISION ANIMAL AGRICULTURE SYMPOSIUM: Data to decisions.
White, B J; Amrine, D E; Larson, R L
2018-04-14
Big data are frequently used in many facets of business and agronomy to enhance knowledge needed to improve operational decisions. Livestock operations collect data of sufficient quantity to perform predictive analytics. Predictive analytics can be defined as a methodology and suite of data evaluation techniques to generate a prediction for specific target outcomes. The objective of this manuscript is to describe the process of using big data and the predictive analytic framework to create tools to drive decisions in livestock production, health, and welfare. The predictive analytic process involves selecting a target variable, managing the data, partitioning the data, then creating algorithms, refining algorithms, and finally comparing accuracy of the created classifiers. The partitioning of the datasets allows model building and refining to occur prior to testing the predictive accuracy of the model with naive data to evaluate overall accuracy. Many different classification algorithms are available for predictive use and testing multiple algorithms can lead to optimal results. Application of a systematic process for predictive analytics using data that is currently collected or that could be collected on livestock operations will facilitate precision animal management through enhanced livestock operational decisions.
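A minimal sketch of the partition-train-compare workflow described above, with a simulated dataset, invented feature names, and two candidate classification algorithms:

```python
# Hypothetical predictive-analytics workflow: partition the data, build
# candidate classifiers, then compare accuracy on held-out (naive) data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(5)
n = 1000
arrival_weight = rng.normal(250, 30, n)     # kg at arrival (invented feature)
transport_hours = rng.uniform(1, 24, n)     # time in transit (invented feature)
# Toy target: light animals with long transport flagged as higher risk
y = ((arrival_weight < 230) & (transport_hours > 12)).astype(int)

X = np.column_stack([arrival_weight, transport_hours])
# Partition: build and refine on training data, hold out naive data for testing
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for model in (LogisticRegression(), RandomForestClassifier(random_state=0)):
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    print(type(model).__name__, f"naive-data accuracy: {acc:.2f}")
```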
Factors predicting the success of trabeculectomy bleb enhancement with needling.
Than, Jonathan Y-X L; Al-Mugheiry, Toby S; Gale, Jesse; Martin, Keith R
2018-02-09
Bleb needling is widely used to restore flow and lower intraocular pressure (IOP) in a failing trabeculectomy. We aimed to measure the safety and efficacy of needling in a large cohort and identify factors that were associated with success and failure. This retrospective audit included all patients who underwent needling at Addenbrooke's Hospital, Cambridge over a 10-year period. Data were available on 91 patients (98% of patients identified), including 191 needlings on 96 eyes. Success was defined as IOP consistently below 21 mm Hg, 16 mm Hg or 13 mm Hg, without reoperation or glaucoma medication. Risk factors for failure were assessed by Cox proportional hazard regression and Kaplan-Meier curves. Success defined as IOP <16 mm Hg was 66.6% at 12 months and 53% at 3 years, and success defined as IOP <21 mm Hg was 77.1% at 12 months and 73.1% at 3 years. Failure after needling was most common in the first 6 months. Failure was predicted by blebs that were non-functional (flat or fibrotic) and no longer injected, while success was predicted by achieving a low IOP immediately after needling. No significant complications were identified. Needling was most successful soon after trabeculectomy, but resuscitation of a long-failed trabeculectomy had a lower likelihood of success. The safety and efficacy compare favourably with alternative treatment approaches. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Svensson, Fredrik; Aniceto, Natalia; Norinder, Ulf; Cortes-Ciriano, Isidro; Spjuth, Ola; Carlsson, Lars; Bender, Andreas
2018-05-29
Making predictions with an associated confidence is highly desirable as it facilitates decision making and resource prioritization. Conformal regression is a machine learning framework that allows the user to define the required confidence and delivers predictions that are guaranteed to be correct to the selected extent. In this study, we apply conformal regression to model molecular properties and bioactivity values and investigate different ways to scale the resultant prediction intervals to create as efficient (i.e., narrow) regressors as possible. Different algorithms to estimate the prediction uncertainty were used to normalize the prediction ranges, and the different approaches were evaluated on 29 publicly available data sets. Our results show that the most efficient conformal regressors are obtained when using the natural exponential of the ensemble standard deviation from the underlying random forest to scale the prediction intervals, but other approaches were almost as efficient. This approach afforded an average prediction range of 1.65 pIC50 units at the 80% confidence level when applied to bioactivity modeling. The choice of nonconformity function has a pronounced impact on the average prediction range with a difference of close to one log unit in bioactivity between the tightest and widest prediction range. Overall, conformal regression is a robust approach to generate bioactivity predictions with associated confidence.
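A minimal sketch of the interval-scaling idea, normalizing absolute calibration errors by the exponential of the per-sample standard deviation across random-forest trees, is given below. Only the exponential-of-ensemble-standard-deviation scaling and the 80% confidence level follow the abstract; the dataset, split sizes and the simplified quantile are assumptions.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a bioactivity dataset; the target is standardized
# so that exp(tree std) stays numerically well-behaved.
X, y = make_regression(n_samples=1500, n_features=10, noise=10.0, random_state=1)
y = (y - y.mean()) / y.std()

X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=1)
X_cal, X_test, y_cal, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=1)

rf = RandomForestRegressor(n_estimators=300, random_state=1).fit(X_train, y_train)

def tree_std(model, X):
    # Spread of the individual tree predictions, used to scale the intervals.
    per_tree = np.stack([t.predict(X) for t in model.estimators_])
    return per_tree.std(axis=0)

# Normalized nonconformity scores on the held-out calibration set.
alpha = np.abs(y_cal - rf.predict(X_cal)) / np.exp(tree_std(rf, X_cal))

confidence = 0.80
q = np.quantile(alpha, confidence)  # simplification of the (n+1)-corrected quantile

# Scaled prediction intervals for new samples: y_hat +/- q * exp(tree std).
y_hat = rf.predict(X_test)
half_width = q * np.exp(tree_std(rf, X_test))
coverage = np.mean(np.abs(y_test - y_hat) <= half_width)
print(f"empirical coverage at {confidence:.0%}: {coverage:.3f}")
print(f"mean interval width: {2 * half_width.mean():.2f}")
```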
Dang, Mia; Ramsaran, Kalinda D.; Street, Melissa E.; Syed, S. Noreen; Barclay-Goddard, Ruth; Miller, Patricia A.
2011-01-01
Purpose: To estimate the predictive accuracy and clinical usefulness of the Chedoke–McMaster Stroke Assessment (CMSA) predictive equations. Method: A longitudinal prognostic study using historical data obtained from 104 patients admitted post cerebrovascular accident was undertaken. Data were abstracted for all patients undergoing rehabilitation post stroke who also had documented admission and discharge CMSA scores. Published predictive equations were used to determine predicted outcomes. To determine the accuracy and clinical usefulness of the predictive model, shrinkage coefficients and predictions with 95% confidence bands were calculated. Results: Complete data were available for 74 patients with a mean age of 65.3±12.4 years. The shrinkage values for the six Impairment Inventory (II) dimensions varied from −0.05 to 0.09; the shrinkage value for the Activity Inventory (AI) was 0.21. The error associated with predictive values was greater than ±1.5 stages for the II dimensions and greater than ±24 points for the AI. Conclusions: This study shows that the large error associated with the predictions (as defined by the confidence band) for the CMSA II and AI limits their clinical usefulness as a predictive measure. Further research to establish predictive models using alternative statistical procedures is warranted. PMID:22654239
Amis, Gregory P; Carpenter, Gail A
2010-03-01
Computational models of learning typically train on labeled input patterns (supervised learning), unlabeled input patterns (unsupervised learning), or a combination of the two (semi-supervised learning). In each case input patterns have a fixed number of features throughout training and testing. Human and machine learning contexts present additional opportunities for expanding incomplete knowledge from formal training, via self-directed learning that incorporates features not previously experienced. This article defines a new self-supervised learning paradigm to address these richer learning contexts, introducing a neural network called self-supervised ARTMAP. Self-supervised learning integrates knowledge from a teacher (labeled patterns with some features), knowledge from the environment (unlabeled patterns with more features), and knowledge from internal model activation (self-labeled patterns). Self-supervised ARTMAP learns about novel features from unlabeled patterns without destroying partial knowledge previously acquired from labeled patterns. A category selection function bases system predictions on known features, and distributed network activation scales unlabeled learning to prediction confidence. Slow distributed learning on unlabeled patterns focuses on novel features and confident predictions, defining classification boundaries that were ambiguous in the labeled patterns. Self-supervised ARTMAP improves test accuracy on illustrative low-dimensional problems and on high-dimensional benchmarks. Model code and benchmark data are available from: http://techlab.eu.edu/SSART/. Copyright 2009 Elsevier Ltd. All rights reserved.
Assessment of cancer and virus antigens for cross-reactivity in human tissues.
Jaravine, Victor; Raffegerst, Silke; Schendel, Dolores J; Frishman, Dmitrij
2017-01-01
Cross-reactivity (CR) or invocation of autoimmune side effects in various tissues has important safety implications in adoptive immunotherapy directed against selected antigens. The ability to predict CR (on-target and off-target toxicities) may help in the early selection of safer therapeutically relevant target antigens. We developed a methodology for the calculation of quantitative CR for any defined peptide epitope. Using this approach, we performed an assessment of 4 groups of 283 currently known human MHC-class-I epitopes, including differentiation antigens, overexpressed proteins, cancer-testis antigens and mutations displayed by tumor cells. In addition, 89 epitopes originating from viral sources were investigated. The natural occurrence of these epitopes in human tissues was assessed based on proteomics abundance data, while the probability of their presentation by MHC-class-I molecules was modelled by the method of Keşmir et al., which combines proteasomal cleavage, TAP affinity and MHC-binding predictions. The results of these analyses for many previously defined peptides are presented as CR indices and tissue profiles. The methodology thus allows for quantitative comparisons of epitopes and is suggested to be suited for the assessment of epitopes of candidate antigens in an early stage of development of adoptive immunotherapy. Our method is implemented as a Java program, with curated datasets stored in a MySQL database. It predicts all naturally possible self-antigens for a given sequence of a therapeutic antigen (or epitope) and, after filtering for predicted immunogenicity, outputs results as an index and profile of CR to the self-antigens in 22 human tissues. The program is implemented as part of the iCrossR webserver, which is publicly available at http://webclu.bio.wzw.tum.de/icrossr/ CONTACT: d.frishman@wzw.tum.de. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Sánchez-Miguel, C; Crilly, J; Grant, J; Mee, J F
2018-06-01
The objective of this study was to determine the diagnostic value of maternal serology for the diagnosis of Salmonella Dublin bovine abortion and stillbirth. A retrospective, unmatched, case-control study was carried out using twenty years' data (1989-2009) from bovine foetal submissions to an Irish government veterinary laboratory. Cases (n = 214) were defined as submissions with an S. Dublin culture-positive foetus from an S. Dublin unvaccinated dam where results of maternal S. Dublin serology were available. Controls (n = 415) were defined as submissions where an alternative diagnosis other than S. Dublin was made in a foetus from an S. Dublin unvaccinated dam where the results of maternal S. Dublin serology were available. A logistic regression model was fitted to the data: the dichotomous dependent variable was the S. Dublin foetal culture result, and the independent variables were the maternal serum agglutination test (SAT) titre results. Salmonella serology correctly classified 87% of S. Dublin culture-positive foetuses at a predicted probability threshold of 0.44 (the cut-off at which sensitivity and specificity are at a maximum, J = 0.67). The sensitivity of the SAT at the same threshold was 73.8% (95% CI: 67.4%-79.5%), and the specificity was 93.2% (95% CI: 90.3%-95.4%). The positive and negative predictive values were 84.9% (95% CI: 79.3%-88.6%) and 87.3% (95% CI: 83.5%-91.3%), respectively. This study illustrates that the use of predicted probability values, rather than the traditional arbitrary breakpoints of negative, inconclusive and positive, increases the diagnostic value of the maternal SAT. Veterinary laboratory diagnosticians and practitioners can thereby recover information from test results that was previously lost to categorization, particularly from results declared to be inconclusive. © 2017 Blackwell Verlag GmbH.
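The threshold-selection step, choosing the predicted-probability cut-off that maximizes Youden's J = sensitivity + specificity − 1, can be illustrated with a short Python sketch; the synthetic titre distributions below are invented, not the Irish laboratory data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
# Synthetic stand-in: log2 SAT titres for culture-positive (1) and control (0) dams.
titre_pos = rng.normal(7, 2, 200)
titre_neg = rng.normal(3, 2, 400)
X = np.concatenate([titre_pos, titre_neg]).reshape(-1, 1)
y = np.concatenate([np.ones(200), np.zeros(400)])

model = LogisticRegression().fit(X, y)
p = model.predict_proba(X)[:, 1]

# Youden's J = sensitivity + specificity - 1, maximized over probability cut-offs.
fpr, tpr, thresholds = roc_curve(y, p)
j = tpr - fpr
best = np.argmax(j)
print(f"optimal probability cut-off: {thresholds[best]:.2f} (J = {j[best]:.2f})")
print(f"sensitivity = {tpr[best]:.3f}, specificity = {1 - fpr[best]:.3f}")
```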
Phillips, Robert S; Sung, Lillian; Amman, Roland A; Riley, Richard D; Castagnola, Elio; Haeusler, Gabrielle M; Klaassen, Robert; Tissing, Wim J E; Lehrnbecher, Thomas; Chisholm, Julia; Hakim, Hana; Ranasinghe, Neil; Paesmans, Marianne; Hann, Ian M; Stewart, Lesley A
2016-01-01
Background: Risk-stratified management of fever with neutropenia (FN) allows intensive management of high-risk cases and early discharge of low-risk cases. No single, internationally validated prediction model of the risk of adverse outcomes exists for children and young people. An individual patient data (IPD) meta-analysis was undertaken to devise one. Methods: The 'Predicting Infectious Complications in Children with Cancer' (PICNICC) collaboration was formed by parent representatives and international clinical and methodological experts. Univariable and multivariable analyses, using random effects logistic regression, were undertaken to derive and internally validate a risk-prediction model for outcomes of episodes of FN based on clinical and laboratory data at presentation. Results: Data came from 22 different study groups from 15 countries, covering 5127 episodes of FN in 3504 patients. There were 1070 episodes in 616 patients from seven studies available for multivariable analysis. Univariable analyses showed associations with microbiologically defined infection (MDI) for many items, including higher temperature, lower white cell counts and acute myeloid leukaemia, but not age. Osteosarcoma/Ewing's sarcoma and more severe mucositis were associated with a decreased risk of MDI. The predictive model included: malignancy type, temperature, being clinically 'severely unwell', haemoglobin, white cell count and absolute monocyte count. It showed moderate discrimination (AUROC 0.723, 95% confidence interval 0.711–0.759) and good calibration (calibration slope 0.95). The model was robust to bootstrap and cross-validation sensitivity analyses. Conclusions: This new prediction model for risk of MDI appears accurate. It requires prospective studies assessing implementation to assist clinicians and parents/patients in individualised decision making. PMID:26954719
Blackburn, Jason K; McNyset, Kristina M; Curtis, Andrew; Hugh-Jones, Martin E
2007-12-01
The ecology and distribution of Bacillus anthracis is poorly understood despite continued anthrax outbreaks in wildlife and livestock throughout the United States. Little work is available to define the potential environments that may lead to prolonged spore survival and subsequent outbreaks. This study used the genetic algorithm for rule-set prediction modeling system to model the ecological niche for B. anthracis in the contiguous United States using wildlife and livestock outbreaks and several environmental variables. The modeled niche is defined by a narrow range of normalized difference vegetation index, precipitation, and elevation, with the geographic distribution heavily concentrated in a narrow corridor from southwest Texas northward into the Dakotas and Minnesota. Because disease control programs rely on vaccination and carcass disposal, and vaccination in wildlife remains untenable, understanding the distribution of B. anthracis plays an important role in efforts to prevent/eradicate the disease. Likewise, these results potentially aid in differentiating endemic/natural outbreaks from industrial-contamination related outbreaks or bioterrorist attacks.
Eronen, Lauri; Toivonen, Hannu
2012-06-06
Biological databases contain large amounts of data concerning the functions and associations of genes and proteins. Integration of data from several such databases into a single repository can aid the discovery of previously unknown connections spanning multiple types of relationships and databases. Biomine is a system that integrates cross-references from several biological databases into a graph model with multiple types of edges, such as protein interactions, gene-disease associations and gene ontology annotations. Edges are weighted based on their type, reliability, and informativeness. We present Biomine and evaluate its performance in link prediction, where the goal is to predict pairs of nodes that will be connected in the future, based on current data. In particular, we formulate protein interaction prediction and disease gene prioritization tasks as instances of link prediction. The predictions are based on a proximity measure computed on the integrated graph. We consider and experiment with several such measures, and perform a parameter optimization procedure where different edge types are weighted to optimize link prediction accuracy. We also propose a novel method for disease-gene prioritization, defined as finding a subset of candidate genes that cluster together in the graph. We experimentally evaluate Biomine by predicting future annotations in the source databases and prioritizing lists of putative disease genes. The experimental results show that Biomine has strong potential for predicting links when a set of selected candidate links is available. The predictions obtained using the entire Biomine dataset are shown to clearly outperform ones obtained using any single source of data alone, when different types of links are suitably weighted. In the gene prioritization task, an established reference set of disease-associated genes is useful, but the results show that under favorable conditions, Biomine can also perform well when no such information is available. The Biomine system is a proof of concept. Its current version contains 1.1 million entities and 8.1 million relations between them, with focus on human genetics. Some of its functionalities are available in a public query interface at http://biomine.cs.helsinki.fi, allowing searching for and visualizing connections between given biological entities.
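One proximity measure of the kind Biomine considers, the best-path probability (the maximum over paths of the product of edge weights), reduces to a shortest-path computation after a −log transform. A toy Python sketch with an invented graph, not the Biomine dataset:

```python
import math
import networkx as nx

# Toy integrated graph: edges carry reliability-style weights in (0, 1].
G = nx.Graph()
edges = [
    ("geneA", "proteinA", 0.9),     # e.g. codes-for
    ("proteinA", "proteinB", 0.7),  # e.g. interacts-with
    ("proteinB", "disease1", 0.6),  # e.g. associated-with
    ("geneA", "go:0006915", 0.8),   # e.g. annotated-with
]
for u, v, w in edges:
    # Best-path probability = max over paths of the product of edge weights;
    # taking -log turns that into a shortest-path problem.
    G.add_edge(u, v, weight=w, cost=-math.log(w))

def proximity(g, source, target):
    cost = nx.shortest_path_length(g, source, target, weight="cost")
    return math.exp(-cost)

print(f"proximity(geneA, disease1) = {proximity(G, 'geneA', 'disease1'):.3f}")
# 0.9 * 0.7 * 0.6 = 0.378
```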
Kha, Hung; Tuble, Sigrid C; Kalyanasundaram, Shankar; Williamson, Richard E
2010-02-01
We understand few details about how the arrangement and interactions of cell wall polymers produce the mechanical properties of primary cell walls. Consequently, we cannot quantitatively assess whether proposed wall structures are mechanically reasonable or assess the effectiveness of proposed mechanisms to change mechanical properties. As a step toward remedying this, we developed WallGen, a Fortran program (available on request) that builds virtual cellulose-hemicellulose networks by stochastic self-assembly whose mechanical properties can be predicted by finite element analysis. The thousands of mechanical elements in the virtual wall are intended to have one-to-one spatial and mechanical correspondence with their real wall counterparts of cellulose microfibrils and hemicellulose chains. User-defined inputs set the properties of the two polymer types (elastic moduli, dimensions of microfibrils and hemicellulose chains, hemicellulose molecular weight) and their population properties (microfibril alignment and volume fraction, polymer weight percentages in the network). This allows exploration of the mechanical consequences of variations in nanostructure that might occur in vivo and provides estimates of how uncertainties regarding certain inputs will affect WallGen's mechanical predictions. We summarize WallGen's operation and the choice of values for user-defined inputs and show that predicted values for the elastic moduli of multinet walls subject to small displacements overlap measured values. "Design of experiment" methods provide systematic exploration of how changed input values affect mechanical properties and suggest that changing microfibril orientation and/or the number of hemicellulose cross-bridges could change wall mechanical anisotropy.
Consensus QSAR model for identifying novel H5N1 inhibitors.
Sharma, Nitin; Yap, Chun Wei
2012-08-01
Due to the importance of neuraminidase in the pathogenesis of influenza virus infection, it has been regarded as the most important drug target for the treatment of influenza. Resistance to currently available drugs and new findings related to the structure of the protein require novel neuraminidase 1 (N1) inhibitors. In this study, a consensus QSAR model with a defined applicability domain (AD) was developed using published N1 inhibitors. The consensus model was validated using an external validation set. The model achieved high sensitivity, specificity, and overall accuracy along with a low false positive rate (FPR) and false discovery rate (FDR). The performance of the model on the external validation set and the training set was comparable, thus it was unlikely to be overfitted. The low FPR and low FDR will increase its accuracy in screening large chemical libraries. Screening of the ZINC library identified 64,772 compounds as probable N1 inhibitors, while 173,674 compounds were found to be outside the AD of the consensus model. The advantage of the current model is that it was developed using a large and diverse dataset and has a defined AD, which prevents its use on compounds that it is not capable of predicting. The consensus model developed in this study is made available via the free software PaDEL-DDPredictor.
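A hedged sketch of the two ingredients named above, a consensus of classifiers plus an applicability domain that abstains on out-of-domain compounds, follows; the descriptors, distance-based AD rule and cut-off are illustrative assumptions, not the PaDEL-DDPredictor implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors
from sklearn.model_selection import train_test_split

# Synthetic descriptor matrix standing in for computed molecular descriptors.
X, y = make_classification(n_samples=1000, n_features=30, random_state=2)
X_train, X_new, y_train, _ = train_test_split(X, y, test_size=0.2, random_state=2)

models = [RandomForestClassifier(n_estimators=100, random_state=2).fit(X_train, y_train),
          LogisticRegression(max_iter=1000).fit(X_train, y_train)]

# Simple applicability domain: mean distance to the k nearest training
# compounds must not exceed a cut-off derived from the training set itself.
k = 5
nn = NearestNeighbors(n_neighbors=k).fit(X_train)
train_dist = nn.kneighbors(X_train)[0][:, 1:].mean(axis=1)  # drop self-distance
ad_cutoff = train_dist.mean() + 2 * train_dist.std()

def consensus_predict(x):
    if nn.kneighbors([x])[0].mean() > ad_cutoff:
        return None  # outside the AD: the model abstains rather than guesses
    votes = [int(m.predict([x])[0]) for m in models]
    return int(round(np.mean(votes)))

labels = [consensus_predict(x) for x in X_new[:10]]
print(labels)  # None marks compounds outside the applicability domain
```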
Yamada, Takashi; Tanaka, Yushiro; Hasegawa, Ryuichi; Sakuratani, Yuki; Yamazoe, Yasushi; Ono, Atsushi; Hirose, Akihiko; Hayashi, Makoto
2014-12-01
We propose a category approach to assessing the testicular toxicity of chemicals with a structure similar to ethylene glycol methyl ether (EGME). Based on toxicity information for EGME and related chemicals, and accompanied by adverse outcome pathway (AOP) information on the testicular toxicity of EGME, this category was defined as chemicals that are metabolized to methoxy- or ethoxyacetic acid, the substance responsible for testicular toxicity. A Japanese chemical inventory was screened using the Hazard Evaluation Support System, which we have developed to support a category approach for predicting the repeated-dose toxicity of chemical substances. Quantitative metabolic information on the related chemicals was then considered, and seventeen chemicals were finally obtained from the inventory as a shortlist for the category. Available data in the literature show that the chemicals for which information is available on the metabolic formation of EGME, ethylene glycol ethyl ether, or methoxy- or ethoxyacetic acid do in fact possess testicular toxicity, suggesting that testicular toxicity is a concern, due to metabolic activation, for the remaining chemicals. Our results clearly demonstrate the practical utility of the AOP-based category approach for predicting the repeated-dose toxicity of chemicals. Copyright © 2014 Elsevier Inc. All rights reserved.
Automated adaptive inference of phenomenological dynamical models.
Daniels, Bryan C; Nemenman, Ilya
2015-08-21
Dynamics of complex systems is often driven by large and intricate networks of microscopic interactions, whose sheer size obfuscates understanding. With limited experimental data, many parameters of such dynamics are unknown, and thus detailed, mechanistic models risk overfitting and making faulty predictions. At the other extreme, simple ad hoc models often miss defining features of the underlying systems. Here we develop an approach that instead constructs phenomenological, coarse-grained models of network dynamics that automatically adapt their complexity to the available data. Such adaptive models produce accurate predictions even when microscopic details are unknown. The approach is computationally tractable, even for a relatively large number of dynamical variables. Using simulated data, it correctly infers the phase space structure for planetary motion, avoids overfitting in a biological signalling system and produces accurate predictions for yeast glycolysis with tens of data points and over half of the interacting species unobserved.
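The core idea, letting the data set the model complexity, can be miniaturized as information-criterion selection over models of increasing order; the polynomial family and BIC below are a toy stand-in for the authors' Bayesian phenomenological framework, not their method.

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy observations from a quadratic law with noise; the true order is unknown.
t = np.linspace(0, 1, 25)
y = 1.0 + 2.0 * t - 3.0 * t**2 + rng.normal(0, 0.05, t.size)

def bic(order):
    coeffs = np.polyfit(t, y, order)
    resid = y - np.polyval(coeffs, t)
    n, k = t.size, order + 1
    # BIC = n*log(RSS/n) + k*log(n); penalizes complexity the data cannot support.
    return n * np.log(np.mean(resid**2)) + k * np.log(n)

scores = {d: bic(d) for d in range(1, 8)}
best = min(scores, key=scores.get)
print({d: round(s, 1) for d, s in scores.items()})
print(f"selected model order: {best}")  # typically 2 for this data
```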
Redundancy and reduction: Speakers manage syntactic information density
Jaeger, T. Florian
2010-01-01
A principle of efficient language production based on information theoretic considerations is proposed: Uniform Information Density predicts that language production is affected by a preference to distribute information uniformly across the linguistic signal. This prediction is tested against data from syntactic reduction. A single multilevel logit model analysis of naturally distributed data from a corpus of spontaneous speech is used to assess the effect of information density on complementizer that-mentioning, while simultaneously evaluating the predictions of several influential alternative accounts: availability, ambiguity avoidance, and dependency processing accounts. Information density emerges as an important predictor of speakers’ preferences during production. As information is defined in terms of probabilities, it follows that production is probability-sensitive, in that speakers’ preferences are affected by the contextual probability of syntactic structures. The merits of a corpus-based approach to the study of language production are discussed as well. PMID:20434141
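Since information is defined as negative log probability, a tiny worked example makes the quantity concrete; the contextual probabilities below are invented for illustration.

```python
import math

# Information (surprisal) of a unit is -log2 of its contextual probability,
# so low-probability material carries more bits. Probabilities are invented.
contextual_prob = {"that": 0.6, "the": 0.25, "defendant": 0.02}
for word, p in contextual_prob.items():
    print(f"{word!r}: {-math.log2(p):.2f} bits")
# UID predicts speakers prefer structures (e.g. mentioning 'that') that keep
# these per-word information values close to uniform across the utterance.
```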
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deur, Alexandre; Shen, Jian -Ming; Wu, Xing -Gang
The Principle of Maximum Conformality (PMC) provides scale-fixed perturbative QCD predictions which are independent of the choice of the renormalization scheme, as well as the choice of the initial renormalization scale. In this article, we will test the PMC by comparing its predictions for the strong coupling $\alpha^{s}_{g_1}(Q)$, defined from the Bjorken sum rule, with predictions using conventional pQCD scale-setting. The two results are found to be compatible with each other and with the available experimental data. However, the PMC provides a significantly more precise determination, although its domain of applicability ($Q \gtrsim 1.5$ GeV) does not extend to as small values of momentum transfer as that of a conventional pQCD analysis ($Q \gtrsim 1$ GeV). In conclusion, we suggest that the PMC range of applicability could be improved by a modified intermediate scheme choice or using a single effective PMC scale.
Supersonic jet noise - Its generation, prediction and effects on people and structures
NASA Technical Reports Server (NTRS)
Preisser, J. S.; Golub, R. A.; Seiner, J. M.; Powell, C. A.
1990-01-01
This paper presents the results of a study aimed at quantifying the effects of jet source noise reduction, increases in aircraft lift, and reduced aircraft thrust on the take-off noise associated with supersonic civil transports. Supersonic jet noise sources are first described, and their frequency and directivity dependence are defined. The study utilizes NASA's Aircraft Noise Prediction Program in a parametric study to weigh the relative benefits of several approaches to low noise. The baseline aircraft concept used in these predictions is the AST-205-1 powered by GE21/J11-B14A scaled engines. Noise assessment is presented in terms of effective perceived noise levels at the FAA's centerline and sideline measuring locations for current subsonic aircraft, and in terms of sound as audiologically perceived by people, together with other indirect effects. The results show that significant noise benefit can be achieved through proper understanding and utilization of all available approaches.
Trophic state, eutrophication and nutrient criteria in streams.
Dodds, Walter K
2007-12-01
Trophic state is the property of energy availability to the food web and defines the foundation of community integrity and ecosystem function. Describing trophic state in streams requires a stoichiometric (nutrient ratio) approach because carbon input rates are linked to nitrogen and phosphorus supply rates. Light determines the source of carbon. Cross-system analyses, small experiments and ecosystem-level manipulations have recently advanced knowledge about these linkages, but not to the point of building complex predictive models that predict all effects of nutrient pollution. Species diversity could indicate the natural distribution of stream trophic status over evolutionary time scales. Delineation of factors that control trophic state and relationships with biological community properties allows determination of goals for management of stream biotic integrity.
Upper Stage Tank Thermodynamic Modeling Using SINDA/FLUINT
NASA Technical Reports Server (NTRS)
Schallhorn, Paul; Campbell, D. Michael; Chase, Sukhdeep; Piquero, Jorge; Fortenberry, Cindy; Li, Xiaoyi; Grob, Lisa
2006-01-01
Modeling to predict the condition of cryogenic propellants in an upper stage of a launch vehicle is necessary for mission planning and successful execution. Traditionally, this effort was performed using custom, in-house proprietary codes, limiting accessibility and application. Phenomena responsible for influencing the thermodynamic state of the propellant have been characterized as distinct events whose sequence defines a mission. These events include thermal stratification, passive thermal control roll (rotation), slosh, and engine firing. This paper demonstrates the use of an off-the-shelf, commercially available thermal/fluid-network code to predict the thermodynamic state of propellant during the coast phase between engine firings, i.e., the first three of the events identified above. Results of this effort will also be presented.
Tuning relaxation dynamics and mechanical properties of polymer films of identical thickness
NASA Astrophysics Data System (ADS)
Kchaou, Marwa; Alcouffe, Pierre; Chandran, Sivasurender; Cassagnau, Philippe; Reiter, Günter; Al Akhrass, Samer
2018-03-01
Using dewetting as a characterization tool, we demonstrate that physical properties of thin polymer films can be regulated and tuned by employing variable processing conditions. For different molecular weights, the variable behavior of polystyrene films of identical thickness, prepared along systematically altered pathways, became predictable through a single parameter P, defined as the ratio of the time required over the time available for the equilibration of polymers. In particular, preparation-induced residual stresses, the corresponding relaxation times as well as the rupture probability of such films (of identical thickness) varied by orders of magnitude, following scaling relations with P. Our experimental findings suggest that we can predictably enhance properties and hence maximize the performance of thin polymer films via appropriately chosen processing conditions.
Sammour, T; Cohen, L; Karunatillake, A I; Lewis, M; Lawrence, M J; Hunter, A; Moore, J W; Thomas, M L
2017-11-01
Recently published data support the use of a web-based risk calculator (www.anastomoticleak.com) for the prediction of anastomotic leak after colectomy. The aim of this study was to externally validate this calculator on a larger dataset. Consecutive adult patients undergoing elective or emergency colectomy for colon cancer at a single institution over a 9-year period were identified using the Binational Colorectal Cancer Audit database. Patients with a rectosigmoid cancer, an R2 resection, or a diverting ostomy were excluded. The primary outcome was anastomotic leak within 90 days as defined by previously published criteria. The area under the receiver operating characteristic curve (AUROC) was derived and compared with that of the American College of Surgeons National Surgical Quality Improvement Program® (ACS NSQIP) calculator and the colon leakage score (CLS) calculator for left colectomy. Commercially available artificial intelligence-based analytics software was used to further interrogate the prediction algorithm. A total of 626 patients were identified. Four hundred and fifty-six patients met the inclusion criteria, and 402 had complete data available for all the calculator variables (126 had a left colectomy). Laparoscopic surgery was performed in 39.6% and emergency surgery in 14.7%. The anastomotic leak rate was 7.2%, with 31.0% requiring reoperation. The anastomoticleak.com calculator was significantly predictive of leak and performed better than the ACS NSQIP calculator (AUROC 0.73 vs 0.58) and the CLS calculator (AUROC 0.96 vs 0.80) for left colectomy. Artificial intelligence-based predictive analysis supported these findings and identified an improved prediction model. The anastomotic leak risk calculator is significantly predictive of anastomotic leak after colon cancer resection. Wider investigation of artificial intelligence-based analytics for risk prediction is warranted.
Defect specific maintenance of SG tubes -- How safe is it?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cizelj, L.; Mavko, B.; Dvorsek, T.
1997-02-01
The efficiency of the defect specific plugging criterion for outside diameter stress corrosion cracking at tube support plates is assessed. The efficiency is defined by three parameters: (1) the number of plugged tubes, (2) the probability of steam generator tube rupture and (3) the predicted accidental leak rate through the defects. A probabilistic model is proposed to quantify the probability of tube rupture, while procedures available in the literature were used to define the accidental leak rates. The defect specific plugging criterion was then compared to the performance of the traditional (45%) plugging criterion using realistic data from the Krsko nuclear power plant. Advantages of the defect specific approach over the traditional one are clearly shown. Some hints on the optimization of the safe life of steam generators are also given.
CHEMICAL PRIORITIZATION FOR DEVELOPMENTAL ...
Defining a predictive model of developmental toxicity from in vitro and high-throughput screening (HTS) assays can be limited by the availability of developmental defects data. ToxRefDB (www.epa.gov/ncct/todrefdb) was built from animal studies on data-rich environmental chemicals, and has been used as an anchor for predictive modeling of ToxCast™ data. Scaling to thousands of untested chemicals requires another approach. ToxPlorer™ was developed as a tool to query and extract specific facts about defined biological entities from the open scientific literature and to coherently synthesize relevant knowledge about relationships, pathways and processes in toxicity. Here, we investigated the specific application of ToxPlorer to weighting HTS assay targets for relevance to developmental defects as defined in the literature. First, we systematically analyzed 88,193 PubMed abstracts selected by bulk query using harmonized terminology for 862 developmental endpoints (www.devtox.net) and 364,334 dictionary term entities in our VT-KB (virtual tissues knowledgebase). We specifically focused on entities corresponding to genes/proteins mapped across >500 ToxCast HTS assays. The 88,193 devtox abstracts mentioned 244 gene/protein entities in an aggregated total of ~8,000 occurrences. Each of the 244 assays was scored and weighted by the number of devtox articles and relevance to developmental processes. This score was used as a feature for chemical prioritization by Toxic
The Integrated Medical Model: A Probabilistic Simulation Model Predicting In-Flight Medical Risks
NASA Technical Reports Server (NTRS)
Keenan, Alexandra; Young, Millennia; Saile, Lynn; Boley, Lynn; Walton, Marlei; Kerstman, Eric; Shah, Ronak; Goodenow, Debra A.; Myers, Jerry G., Jr.
2015-01-01
The Integrated Medical Model (IMM) is a probabilistic model that uses simulation to predict mission medical risk. Given a specific mission and crew scenario, medical events are simulated using Monte Carlo methodology to provide estimates of resource utilization, probability of evacuation, probability of loss of crew, and the amount of mission time lost due to illness. Mission and crew scenarios are defined by mission length, extravehicular activity (EVA) schedule, and crew characteristics including: sex, coronary artery calcium score, contacts, dental crowns, history of abdominal surgery, and EVA eligibility. The Integrated Medical Evidence Database (iMED) houses the model inputs for one hundred medical conditions using in-flight, analog, and terrestrial medical data. Inputs include incidence, event durations, resource utilization, and crew functional impairment. Severity of conditions is addressed by defining statistical distributions on the dichotomized best and worst-case scenarios for each condition. The outcome distributions for conditions are bounded by the treatment extremes of the fully treated scenario in which all required resources are available and the untreated scenario in which no required resources are available. Upon occurrence of a simulated medical event, treatment availability is assessed, and outcomes are generated depending on the status of the affected crewmember at the time of onset, including any pre-existing functional impairments or ongoing treatment of concurrent conditions. The main IMM outcomes, including probability of evacuation and loss of crew life, time lost due to medical events, and resource utilization, are useful in informing mission planning decisions. To date, the IMM has been used to assess mission-specific risks with and without certain crewmember characteristics, to determine the impact of eliminating certain resources from the mission medical kit, and to design medical kits that maximally benefit crew health while meeting mass and volume constraints.
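A drastically simplified sketch of the Monte Carlo logic, simulating medical events from incidence rates and tallying evacuation outcomes, is shown below; the conditions, rates, mission parameters and treatment rule are invented placeholders, not iMED values.

```python
import numpy as np

rng = np.random.default_rng(5)

# Assumed incidence rates (events per person-year) and untreated-evacuation
# probabilities for three placeholder conditions.
conditions = {"dental": 0.1, "renal_stone": 0.05, "injury": 0.3}
evac_prob_untreated = {"dental": 0.02, "renal_stone": 0.3, "injury": 0.1}
mission_years, crew, n_trials = 0.5, 4, 100_000

evacuations = 0
for _ in range(n_trials):
    evac = False
    for cond, rate in conditions.items():
        n_events = rng.poisson(rate * mission_years * crew)
        # Toy resource rule: the kit treats the first event of each condition;
        # later events go untreated and may force an evacuation.
        for k in range(n_events):
            if k > 0 and rng.random() < evac_prob_untreated[cond]:
                evac = True
    evacuations += evac
print(f"estimated probability of evacuation: {evacuations / n_trials:.4f}")
```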
Models for the indices of thermal comfort
Adrian, Streinu-Cercel; Sergiu, Costoiu; Maria, Mârza; Anca, Streinu-Cercel; Monica, Mârza
2008-01-01
The current paper proposes the analysis and extended formulation required for establishing decisions in the management of the national medical system from the point of view of quality and efficiency, such as: conceiving models for the indices of thermal comfort, defining the predicted mean vote (on the thermal sensation scale) "PMV", defining the metabolism "M", heat transfer between the human body and the environment, defining the predicted percentage of dissatisfied people "PPD", and defining all indices of thermal comfort. PMID:20108461
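The PMV-to-PPD relation is standardized (ISO 7730), so it can be stated exactly: PPD = 100 − 95·exp(−0.03353·PMV⁴ − 0.2179·PMV²). A short sketch follows; computing PMV itself from the metabolism M and body-environment heat exchange is considerably more involved and is not shown.

```python
import math

def ppd_from_pmv(pmv):
    # ISO 7730 relation: predicted percentage of dissatisfied occupants
    # as a function of the predicted mean vote on the 7-point scale.
    return 100.0 - 95.0 * math.exp(-0.03353 * pmv**4 - 0.2179 * pmv**2)

for pmv in (-2, -1, 0, 1, 2):
    print(f"PMV = {pmv:+d} -> PPD = {ppd_from_pmv(pmv):.1f}%")
# PMV = 0 (thermal neutrality) still yields PPD = 5%: some occupants
# are predicted to be dissatisfied even in nominally ideal conditions.
```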
Somers, Jeffrey T.; Newby, Nathaniel; Lawrence, Charles; DeWeese, Richard; Moorcroft, David; Phelps, Shean
2014-01-01
The objective of this study was to investigate new methods for predicting injury from expected spaceflight dynamic loads by leveraging a broader range of available information in injury biomechanics. Although all spacecraft designs were considered, the primary focus was the National Aeronautics and Space Administration Orion capsule, as the authors have the most knowledge and experience related to this design. The team defined a list of critical injuries and selected the THOR anthropomorphic test device as the basis for new standards and requirements. In addition, the team down-selected the list of available injury metrics to the following: the head injury criterion (HIC15), kinematic brain rotational injury criteria, neck axial tension and compression force, maximum chest deflection, lateral shoulder force and displacement, acetabular lateral force, thoracic spine axial compression force, ankle moments, and average distal forearm speed limits. The team felt that these metrics capture all of the injuries that might be expected by a seated crewmember during vehicle aborts and landings. Using previously determined injury risk levels for nominal and off-nominal landings, appropriate injury assessment reference values (IARVs) were defined for each metric. Musculoskeletal deconditioning due to exposure to reduced gravity over time can affect injury risk during landing; therefore a deconditioning factor was applied to all IARVs. Although there are appropriate injury data for each anatomical region of interest, additional research is needed for several metrics to improve the confidence score. PMID:25152879
Prediction of infarction volume and infarction growth rate in acute ischemic stroke.
Kamran, Saadat; Akhtar, Naveed; Alboudi, Ayman; Kamran, Kainat; Ahmad, Arsalan; Inshasi, Jihad; Salam, Abdul; Shuaib, Ashfaq; Qidwai, Uvais
2017-08-08
The prediction of infarction volume after stroke onset depends on the shape of the growth dynamics of the infarction. To understand growth patterns that predict lesion volume changes, we studied currently available models described in the literature and compared them with the Adaptive Neuro-Fuzzy Inference System [ANFIS], a method previously unused in the prediction of infarction growth and infarction volume (IV). We included 67 patients with malignant middle cerebral artery [MMCA] stroke who underwent decompressive hemicraniectomy. All patients had at least three cranial CT scans prior to surgery. The rate of growth and the volume of infarction measured on the third CT were predicted with ANFIS with no statistically significant difference from the ground truth [P = 0.489]. This was not possible with linear, logarithmic or exponential methods. ANFIS was able to predict infarction volume [IV3] over a wide range of volumes [163.7-600 cm³] and times [22-110 hours]. The cross-correlation [CRR] between the ANFIS-predicted IV3 and the original data indicated a similarity of 82%, followed by 70% for the logarithmic, 63% for the exponential and 48% for the linear method. Our study shows that ANFIS is superior to previously defined methods in the prediction of the infarction growth rate (IGR), with reasonable accuracy over wide time and volume ranges.
Alves, Vinicius M.; Muratov, Eugene; Fourches, Denis; Strickland, Judy; Kleinstreuer, Nicole; Andrade, Carolina H.; Tropsha, Alexander
2015-01-01
Repetitive exposure to a chemical agent can induce an immune reaction in inherently susceptible individuals that leads to skin sensitization. Although many chemicals have been reported as skin sensitizers, there have been very few rigorously validated QSAR models with defined applicability domains (AD) that were developed using a large group of chemically diverse compounds. In this study, we have aimed to compile, curate, and integrate the largest publicly available dataset related to chemically-induced skin sensitization, use this data to generate rigorously validated QSAR models for skin sensitization, and employ these models as a virtual screening tool for identifying putative sensitizers among environmental chemicals. We followed best practices for model building and validation implemented with our predictive QSAR workflow using the random forest modeling technique in combination with SiRMS and Dragon descriptors. The Correct Classification Rate (CCR) for QSAR models discriminating sensitizers from non-sensitizers was 71–88% when evaluated on several external validation sets, within a broad AD, with positive (for sensitizers) and negative (for non-sensitizers) predicted rates of 85% and 79% respectively. When compared to the skin sensitization module included in the OECD QSAR toolbox as well as to the skin sensitization model in the publicly available VEGA software, our models showed a significantly higher prediction accuracy for the same sets of external compounds as evaluated by Positive Predicted Rate, Negative Predicted Rate, and CCR. These models were applied to identify putative chemical hazards in the ScoreCard database of possible skin or sense organ toxicants as primary candidates for experimental validation. PMID:25560674
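The Correct Classification Rate reported above is the mean of sensitivity and specificity; a minimal sketch with invented labels:

```python
from sklearn.metrics import confusion_matrix

# Toy external-validation labels: 1 = sensitizer, 0 = non-sensitizer.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # fraction of sensitizers correctly flagged
specificity = tn / (tn + fp)   # fraction of non-sensitizers correctly cleared
ccr = (sensitivity + specificity) / 2  # correct classification rate
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} CCR={ccr:.2f}")
```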
Predicting Backdrafting and Spillage for Natural-Draft Gas Combustion Appliances: Validating VENT-II
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rapp, Vi H.; Pastor-Perez, Albert; Singer, Brett C.
2013-04-01
VENT-II is a computer program designed to provide detailed analysis of natural draft and induced draft combustion appliance vent-systems (i.e., furnace or water heater). This program is capable of predicting house depressurization thresholds that lead to backdrafting and spillage of combustion appliances; however, validation reports of the program being applied for this purpose are not readily available. The purpose of this report is to assess VENT-II's ability to predict combustion gas spillage events due to house depressurization by comparing VENT-II simulated results with experimental data for four appliance configurations. The results show that VENT-II correctly predicts depressurizations resulting in spillage for natural draft appliances operating in cold and mild outdoor conditions, but not for hot conditions. In the latter case, the predicted depressurizations depend on whether the vent section is defined as part of the vent connector or the common vent when setting up the model. Overall, the VENT-II solver requires further investigation before it can be used reliably to predict spillage caused by depressurization over a full year of weather conditions, especially where hot conditions occur.
Annotating spatio-temporal datasets for meaningful analysis in the Web
NASA Astrophysics Data System (ADS)
Stasch, Christoph; Pebesma, Edzer; Scheider, Simon
2014-05-01
More and more environmental datasets that vary in space and time are available in the Web. This comes along with an advantage of using the data for other purposes than originally foreseen, but also with the danger that users may apply inappropriate analysis procedures due to lack of important assumptions made during the data collection process. In order to guide towards a meaningful (statistical) analysis of spatio-temporal datasets available in the Web, we have developed a Higher-Order-Logic formalism that captures some relevant assumptions in our previous work [1]. It allows to proof on meaningful spatial prediction and aggregation in a semi-automated fashion. In this poster presentation, we will present a concept for annotating spatio-temporal datasets available in the Web with concepts defined in our formalism. Therefore, we have defined a subset of the formalism as a Web Ontology Language (OWL) pattern. It allows capturing the distinction between the different spatio-temporal variable types, i.e. point patterns, fields, lattices and trajectories, that in turn determine whether a particular dataset can be interpolated or aggregated in a meaningful way using a certain procedure. The actual annotations that link spatio-temporal datasets with the concepts in the ontology pattern are provided as Linked Data. In order to allow data producers to add the annotations to their datasets, we have implemented a Web portal that uses a triple store at the backend to store the annotations and to make them available in the Linked Data cloud. Furthermore, we have implemented functions in the statistical environment R to retrieve the RDF annotations and, based on these annotations, to support a stronger typing of spatio-temporal datatypes guiding towards a meaningful analysis in R. [1] Stasch, C., Scheider, S., Pebesma, E., Kuhn, W. (2014): "Meaningful spatial prediction and aggregation", Environmental Modelling & Software, 51, 149-165.
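The annotation idea can be sketched with rdflib: tag a dataset with its variable type and check that type before deciding whether interpolation is meaningful. The vocabulary URI below is invented for illustration; it is not the authors' published OWL pattern.

```python
from rdflib import Graph, Namespace, URIRef, RDF

# Hypothetical vocabulary for the variable types named in the abstract
# (point pattern, field, lattice, trajectory); the URI is illustrative.
MST = Namespace("http://example.org/meaningful-spatial-types#")

g = Graph()
dataset = URIRef("http://example.org/data/air_temperature_2013")
g.add((dataset, RDF.type, MST.Field))  # fields support spatial interpolation

def can_interpolate(graph, ds):
    # Only datasets annotated as fields are meaningfully interpolable;
    # aggregating a point pattern, say, calls for different procedures.
    return (ds, RDF.type, MST.Field) in graph

print(can_interpolate(g, dataset))  # True
print(g.serialize(format="turtle"))
```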
Parry, S; Denehy, L; Berney, S; Browning, L
2014-03-01
(1) To determine the ability of the Melbourne risk prediction tool to predict a pulmonary complication, as defined by the Melbourne Group Scale, in a medically defined high-risk upper abdominal surgery population during the postoperative period; (2) to identify the incidence of postoperative pulmonary complications; and (3) to examine the risk factors for postoperative pulmonary complications in this high-risk population. Observational cohort study. Tertiary Australian referral centre. 50 individuals who underwent medically defined high-risk upper abdominal surgery. Presence of postoperative pulmonary complications was screened daily for seven days using the Melbourne Group Scale (Version 2). Postoperative pulmonary risk prediction was calculated according to the Melbourne risk prediction tool. (1) Melbourne risk prediction tool; and (2) the incidence of postoperative pulmonary complications. Sixty-six percent (33/50) underwent hepatobiliary or upper gastrointestinal surgery. Mean (SD) anaesthetic duration was 377.8 (165.5) minutes. The risk prediction tool classified 84% (42/50) as high risk. Overall postoperative pulmonary complication incidence was 42% (21/50). The tool was 91% sensitive and 21% specific, with a 50% chance of correct classification. This is the first study to externally validate the Melbourne risk prediction tool in an independent medically defined high-risk population. A higher incidence of postoperative pulmonary complications was observed than previously reported. Results demonstrated poor validity of the tool in a population already defined medically as high risk and when applied postoperatively. This observational study has identified several important points to consider in future trials. Copyright © 2013 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.
Crowley, R Webster; Asthagiri, Ashok R; Starke, Robert M; Zusman, Edie E; Chiocca, E Antonio; Lonser, Russell R
2012-04-01
Factors during neurosurgical residency that are predictive of an academic career path and promotion have not been defined. We sought to determine factors associated with selecting and sustaining an academic career in neurosurgery by analyzing in-training factors for all graduates of Accreditation Council for Graduate Medical Education (ACGME)-accredited programs between 1985 and 1990. Neurological surgery residency graduates (between 1985 and 1990) from ACGME-approved training programs were analyzed to determine factors associated with choosing an academic career path and having academic success. Information was available for 717 of the 720 (99%) neurological surgery resident training graduates (678 male, 39 female). One hundred thirty-eight graduates (19.3%) held full-time academic positions. One hundred seven (14.9%) were professors and 35 (4.9%) were department chairs/chiefs. An academic career path/success was associated with more total (5.1 vs 1.9; P < .001) and first-author publications (3.0 vs 1.0; P < .001) during residency. Promotion to professor or chair/chief was associated with more publications during residency (P < .001). Total publications and first-author publications were independent predictors of holding a current academic position and becoming professor or chair/chief. Although male trainees published more than female trainees (2.6 vs 0.9 publications; P < .004) during training, no significant sex difference was observed regarding current academic position. Program size (≥ 2 graduates a year; P = .02) was predictive of an academic career but not predictive of becoming professor or chair/chief (P > .05). Defined in-training factors including number of total publications, number of first-author publications, and program size are predictive of residents choosing and succeeding in an academic career path.
Hessel, Ellen V S; Staal, Yvonne C M; Piersma, Aldert H
2018-03-13
Developmental neurotoxicity entails one of the most complex areas in toxicology. Animal studies provide only limited information as to human relevance. A multitude of alternative models have been developed over the years, providing insights into mechanisms of action. We give an overview of fundamental processes in neural tube formation, brain development and neural specification, aiming at illustrating complexity rather than comprehensiveness. We also give a flavor of the wealth of alternative methods in this area. Given the impressive progress in mechanistic knowledge of human biology and toxicology, the time is right for a conceptual approach for designing testing strategies that cover the integral mechanistic landscape of developmental neurotoxicity. The ontology approach provides a framework for defining this landscape, upon which an integral in silico model for predicting toxicity can be built. It subsequently directs the selection of in vitro assays for rate-limiting events in the biological network, to feed parameter tuning in the model, leading to prediction of the toxicological outcome. Validation of such models requires primary attention to coverage of the biological domain, rather than classical predictive value of individual tests. Proofs of concept for such an approach are already available. The challenge is in mining modern biology, toxicology and chemical information to feed intelligent designs, which will define testing strategies for neurodevelopmental toxicity testing. Copyright © 2018 Elsevier Inc. All rights reserved.
Li, Xiaohong; Blount, Patricia L; Vaughan, Thomas L; Reid, Brian J
2011-02-01
Aside from primary prevention, early detection remains the most effective way to decrease mortality associated with the majority of solid cancers. Previous cancer screening models are largely based on classification of at-risk populations into three conceptually defined groups (normal, cancer without symptoms, and cancer with symptoms). Unfortunately, this approach has achieved limited successes in reducing cancer mortality. With advances in molecular biology and genomic technologies, many candidate somatic genetic and epigenetic "biomarkers" have been identified as potential predictors of cancer risk. However, none have yet been validated as robust predictors of progression to cancer or shown to reduce cancer mortality. In this Perspective, we first define the necessary and sufficient conditions for precise prediction of future cancer development and early cancer detection within a simple physical model framework. We then evaluate cancer risk prediction and early detection from a dynamic clonal evolution point of view, examining the implications of dynamic clonal evolution of biomarkers and the application of clonal evolution for cancer risk management in clinical practice. Finally, we propose a framework to guide future collaborative research between mathematical modelers and biomarker researchers to design studies to investigate and model dynamic clonal evolution. This approach will allow optimization of available resources for cancer control and intervention timing based on molecular biomarkers in predicting cancer among various risk subsets that dynamically evolve over time.
SLiMSearch 2.0: biological context for short linear motifs in proteins
Davey, Norman E.; Haslam, Niall J.; Shields, Denis C.
2011-01-01
Short, linear motifs (SLiMs) play a critical role in many biological processes. The SLiMSearch 2.0 (Short, Linear Motif Search) web server allows researchers to identify occurrences of a user-defined SLiM in a proteome, using conservation and protein disorder context statistics to rank occurrences. User-friendly output and visualizations of motif context allow the user to quickly gain insight into the validity of a putatively functional motif occurrence. For each motif occurrence, overlapping UniProt features and annotated SLiMs are displayed. Visualization also includes annotated multiple sequence alignments surrounding each occurrence, showing conservation and protein disorder statistics in addition to known and predicted SLiMs, protein domains and known post-translational modifications. In addition, enrichment of Gene Ontology terms and protein interaction partners are provided as indicators of possible motif function. All web server results are available for download. Users can search motifs against the human proteome or a subset thereof defined by UniProt accession numbers or GO terms. The SLiMSearch server is available at: http://bioware.ucd.ie/slimsearch2.html. PMID:21622654
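A user-defined SLiM is conventionally written as a regular expression; the minimal scan below illustrates the matching step only, without the server's conservation and disorder scoring. The pattern and sequences are invented.

```python
import re

# A user-defined SLiM in regular-expression form; this example pattern is
# a PxxP-style motif and the proteome entries are invented.
slim = re.compile(r"P..P")
proteome = {
    "PROT1": "MSTEPKKPAAPGGLFDK",
    "PROT2": "MALWMRLLPLLALLALWGPD",
}
for name, seq in proteome.items():
    for m in slim.finditer(seq):
        print(f"{name}: {m.group()} at positions {m.start() + 1}-{m.end()}")
# The full server would additionally rank each occurrence by conservation
# and predicted structural disorder of the matching region.
```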
Life prediction modeling based on cyclic damage accumulation
NASA Technical Reports Server (NTRS)
Nelson, Richard S.
1988-01-01
A high temperature, low cycle fatigue life prediction method was developed. This method, Cyclic Damage Accumulation (CDA), was developed for use in predicting the crack initiation lifetime of gas turbine engine materials, where initiation was defined as a 0.030 inch surface length crack. A principal engineering feature of the CDA method is the minimum data base required for implementation. Model constants can be evaluated through a few simple specimen tests such as monotonic loading and rapid cycle fatigue. The method was expanded to account for the effects on creep-fatigue life of complex loadings such as thermomechanical fatigue, hold periods, waveshapes, mean stresses, multiaxiality, cumulative damage, coatings, and environmental attack. A significant data base was generated on the behavior of the cast nickel-base superalloy B1900+Hf, including hundreds of specimen tests under such loading conditions. This information is being used to refine and extend the CDA life prediction model, which is now nearing completion. The model is also being verified using additional specimen tests on wrought INCO 718, and the final version of the model is expected to be adaptable to almost any high-temperature alloy. The model is currently available in the form of equations and related constants. A proposed contract addition will make the model available in the near future in the form of a computer code to potential users.
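The CDA damage equations themselves are not reproduced in the abstract; as a point of orientation, the sketch below shows the simplest form of cyclic damage bookkeeping, linear (Miner's rule) accumulation with a hypothetical strain-life curve, which CDA generalizes with loading-history, hold-time, and environment effects.

```python
# Linear (Miner's rule) damage accumulation: a minimal sketch, not the CDA
# model itself. Crack initiation is predicted when the damage sum reaches 1.0.

def miner_damage(cycle_blocks, life_curve):
    """cycle_blocks: list of (strain_range, n_cycles) applied in service.
    life_curve: maps a strain range to cycles-to-initiation N_f."""
    damage = 0.0
    for strain_range, n in cycle_blocks:
        damage += n / life_curve(strain_range)  # each block consumes n/N_f of life
    return damage

# Hypothetical strain-life curve: 1000 cycles at a 1% strain range.
life = lambda d_eps: 1000.0 * (0.01 / d_eps) ** 1.7
blocks = [(0.010, 200), (0.006, 1000)]
print(f"accumulated damage = {miner_damage(blocks, life):.2f}")  # ~0.62
```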
Habibi, Narjeskhatoon; Mohd Hashim, Siti Z; Norouzi, Alireza; Samian, Mohammed Razip
2014-05-08
Over the last 20 years in biotechnology, the production of recombinant proteins has been a crucial bioprocess in both the biopharmaceutical and research arenas in terms of human health, scientific impact and economic volume. Although logical strategies of genetic engineering have been established, protein overexpression is still an art. In particular, heterologous expression is often hindered by low levels of production and frequently fails for unclear reasons. The problem is accentuated because no generic solution is available to enhance heterologous overexpression. For a given protein, the extent of its solubility can indicate the quality of its function. Over 30% of synthesized proteins are not soluble. Under given experimental conditions, including temperature and expression host, protein solubility is a feature ultimately defined by the protein's sequence. To date, numerous machine learning-based methods have been proposed to predict the solubility of a protein from its amino acid sequence alone. In spite of 20 years of research on the matter, no comprehensive review of the published methods is available. This paper presents an extensive review of the existing models for predicting protein solubility in the Escherichia coli recombinant protein overexpression system. The models are investigated and compared regarding the datasets used, features, feature selection methods, machine learning techniques and prediction accuracy. A discussion of the models is provided at the end. This study aims to investigate the machine learning-based methods for predicting recombinant protein solubility extensively, so as to offer both a general and a detailed understanding for researchers in the field. Some of the models offer acceptable prediction performance and convenient user interfaces. These models can be considered valuable tools for predicting recombinant protein overexpression outcomes before performing laboratory experiments, thus saving labour, time and cost.
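As an illustration of the class of models reviewed, the following sketch trains a classifier on simple sequence-derived features (amino-acid composition); the toy sequences, labels, and choice of logistic regression are placeholders, not any specific published model.

```python
from collections import Counter
from sklearn.linear_model import LogisticRegression

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    """20-dimensional amino-acid composition feature vector."""
    counts = Counter(seq)
    return [counts[a] / len(seq) for a in AMINO_ACIDS]

# Toy training set (hypothetical sequences and solubility labels).
train_seqs = ["MKKLLPTAAAGLLLLAAQPAMA", "MWWFWWPHHHWWCCW",
              "MASTKQLLNSEEDAK", "MGFFYYWWLLCCPPHH"]
train_labels = [1, 0, 1, 0]  # 1 = soluble, 0 = insoluble

model = LogisticRegression(max_iter=1000)
model.fit([composition(s) for s in train_seqs], train_labels)
print(model.predict_proba([composition("MKTAYIAKQR")])[0, 1])  # P(soluble)
```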
Advanced Cloud Forecasting for Solar Energy Production
NASA Astrophysics Data System (ADS)
Werth, D. W.; Parker, M. J.
2017-12-01
A power utility must decide days in advance how it will allocate projected loads among its various generating sources. If the latter includes solar plants, the utility must predict how much energy the plants will produce - any shortfall will have to be compensated for by purchasing power as it is needed, when it is more expensive. To avoid this, utilities often err on the side of caution and assume that a relatively small amount of solar energy will be available, and allocate correspondingly more load to coal-fired plants. If solar irradiance can be predicted more accurately, utilities can be more confident that the predicted solar energy will indeed be available when needed, and assign solar plants a larger share of the future load. Solar power production is increasing in the Southeast, but is often hampered by irregular cloud fields, especially during high-pressure periods when rapid afternoon thunderstorm development can occur during what was predicted to be a clear day. We are currently developing an analog forecasting system to predict solar irradiance at the surface at the Savannah River Site in South Carolina, with the goal of improving predictions of available solar energy. Analog forecasting is based on the assumption that similar initial conditions will lead to similar outcomes, and involves the use of an algorithm to look through the weather patterns of the past to identify previous conditions (the analogs) similar to those of today. For our application, we select three predictor variables - sea-level pressure, 700mb geopotential, and 700mb humidity. These fields for the current day are compared to those from past days, and a weighted combination of the differences (defined by a cost function) is used to select the five best analog days. The observed solar irradiance values subsequent to the dates of those analogs are then combined to represent the forecast for the next day. We will explain how we apply the analog process, and compare it to existing solar forecasts.
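A minimal sketch of the analog-selection step described above, assuming the three predictor fields are available as gridded arrays; the field shapes, weights, and synthetic data are illustrative, not the operational configuration.

```python
import numpy as np

def analog_forecast(today, archive, irradiance, weights=(1.0, 1.0, 1.0), n_analogs=5):
    """Cost = weighted sum of RMS field differences; forecast = mean observed
    irradiance following the n_analogs closest past days."""
    costs = []
    for day, fields in enumerate(archive):
        cost = sum(w * np.sqrt(np.mean((f_today - f_past) ** 2))
                   for w, f_today, f_past in zip(weights, today, fields))
        costs.append((cost, day))
    best = sorted(costs)[:n_analogs]
    return np.mean([irradiance[day] for _, day in best])

rng = np.random.default_rng(0)
# Each day: SLP, 700 mb geopotential, 700 mb humidity on a toy 10x10 grid.
archive = [[rng.normal(size=(10, 10)) for _ in range(3)] for _ in range(365)]
irr = rng.uniform(100, 900, size=365)          # observed daily irradiance (W/m^2)
today = [rng.normal(size=(10, 10)) for _ in range(3)]
print(f"analog forecast: {analog_forecast(today, archive, irr):.0f} W/m^2")
```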
Ali, Mehreen; Khan, Suleiman A; Wennerberg, Krister; Aittokallio, Tero
2018-04-15
Proteomics profiling is increasingly being used for molecular stratification of cancer patients and cell-line panels. However, systematic assessment of the predictive power of large-scale proteomic technologies across various drug classes and cancer types is currently lacking. To that end, we carried out the first pan-cancer, multi-omics comparative analysis of the relative performance of two proteomic technologies, targeted reverse phase protein array (RPPA) and global mass spectrometry (MS), in terms of their accuracy for predicting the sensitivity of cancer cells to both cytotoxic chemotherapeutics and molecularly targeted anticancer compounds. Our results in two cell-line panels demonstrate how MS profiling improves drug response predictions beyond that of the RPPA or the other omics profiles when used alone. However, frequent missing MS data values complicate its use in predictive modeling and required additional filtering, such as focusing on completely measured or known oncoproteins, to obtain maximal predictive performance. Rather strikingly, the two proteomics profiles provided complementary predictive signal both for the cytotoxic and targeted compounds. Further, information about the cellular-abundance of primary target proteins was found critical for predicting the response of targeted compounds, although the non-target features also contributed significantly to the predictive power. The clinical relevance of the selected protein markers was confirmed in cancer patient data. These results provide novel insights into the relative performance and optimal use of the widely applied proteomic technologies, MS and RPPA, which should prove useful in translational applications, such as defining the best combination of omics technologies and marker panels for understanding and predicting drug sensitivities in cancer patients. Processed datasets, R as well as Matlab implementations of the methods are available at https://github.com/mehr-een/bemkl-rbps. mehreen.ali@helsinki.fi or tero.aittokallio@fimm.fi. Supplementary data are available at Bioinformatics online.
Systematic discovery and characterization of fly microRNAs using 12 Drosophila genomes
Stark, Alexander; Kheradpour, Pouya; Parts, Leopold; Brennecke, Julius; Hodges, Emily; Hannon, Gregory J.; Kellis, Manolis
2007-01-01
MicroRNAs (miRNAs) are short regulatory RNAs that inhibit target genes by complementary binding in 3′ untranslated regions (3′ UTRs). They are one of the most abundant classes of regulators, targeting a large fraction of all genes, making their comprehensive study a requirement for understanding regulation and development. Here we use 12 Drosophila genomes to define structural and evolutionary signatures of miRNA hairpins, which we use for their de novo discovery. We predict >41 novel miRNA genes, which encompass many unique families, and 28 of which are validated experimentally. We also define signals for the precise start position of mature miRNAs, which suggest corrections of previously known miRNAs, often leading to drastic changes in their predicted target spectrum. We show that miRNA discovery power scales with the number and divergence of species compared, suggesting that such approaches can be successful in human as dozens of mammalian genomes become available. Interestingly, for some miRNAs sense and anti-sense hairpins score highly and mature miRNAs from both strands can indeed be found in vivo. Similarly, miRNAs with weak 5′ end predictions show increased in vivo processing of multiple alternate 5′ ends and have fewer predicted targets. Lastly, we show that several miRNA star sequences score highly and are likely functional. For mir-10 in particular, both arms show abundant processing, and both show highly conserved target sites in Hox genes, suggesting a possible cooperation of the two arms, and their role as a master Hox regulator. PMID:17989255
Proceedings of the vertical axis wind turbine (VAWT) design technology seminar for industry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnston, S.F. Jr.
1980-08-01
The objective of the Vertical Axis Wind Turbine (VAWT) Program at Sandia National Laboratories is to develop technology that results in economical, industry-produced, and commercially marketable wind energy systems. The purpose of the VAWT Design Technology Seminar for Industry was to provide for the exchange of the current state of the art and predictions for future VAWT technology. Emphasis was placed on the transfer of Sandia's technical developments and on defining the available analytic and design tools. Separate abstracts are included for presented papers.
Ganau, Mario; Paris, Marco; Syrmos, Nikolaos; Ganau, Laura; Ligarotti, Gianfranco K I; Moghaddamjou, Ali; Prisco, Lara; Ambu, Rossano; Chibbaro, Salvatore
2018-02-26
The field of neuro-oncology is rapidly progressing and internalizing many of the recent discoveries coming from research conducted in basic science laboratories worldwide. This systematic review aims to summarize the impact of nanotechnology and biomedical engineering in defining clinically meaningful predictive biomarkers with a potential application in the management of patients with brain tumors. Data were collected through a review of the existing English literature performed on Scopus, MEDLINE, MEDLINE in Process, EMBASE, and/or Cochrane Central Register of Controlled Trials: all available basic science and clinical papers relevant to address the above-stated research question were included and analyzed in this study. Based on the results of this systematic review we can conclude that: (1) the advances in nanotechnology and bioengineering are supporting tremendous efforts in optimizing the methods for genomic, epigenomic and proteomic profiling; (2) a successful translational approach is attempting to identify a growing number of biomarkers, some of which appear to be promising candidates in many areas of neuro-oncology; (3) the designing of Randomized Controlled Trials will be warranted to better define the prognostic value of those biomarkers and biosignatures.
Toropova, A P; Toropov, A A; Benfenati, E
2015-01-01
Most quantitative structure-property/activity relationships (QSPRs/QSARs) predict various endpoints related to organic compounds. Gradually, the variety of organic compounds has been extended to inorganic, organometallic compounds and polymers. However, the so-called molecular descriptors cannot be defined for super-complex substances such as different nanomaterials and peptides, since there is no simple and clear representation of their molecular structure. Some possible ways to define approaches for a predictive model in the case of super-complex substances are discussed. The basic idea of the approach is to replace the traditionally used paradigm 'the endpoint is a mathematical function of the molecular structure' with another paradigm 'the endpoint is a mathematical function of available eclectic information'. The eclectic data can be (i) conditions of a synthesis, (ii) technological attributes, (iii) size of nanoparticles, (iv) concentration, (v) attributes related to cell membranes, and so on. Two examples of quasi-QSPR/QSAR analyses are presented and discussed. These are (i) photocatalytic decolourization rate constants (DRC, 10⁻⁵/s) of different nanopowders; and (ii) the cellular viability under the effect of nano-SiO₂.
Rosa-Jiménez, Francisco; Rosa-Jiménez, Ascensión; Lozano-Rodríguez, Aquiles; Santoro-Martínez, María Del Carmen; Duro-López, María Del Carmen; Carreras-Álvarez de Cienfuegos, Amelia
2015-01-01
To compare the efficacy of the most familiar clinical prediction rules in combination with D-dimer testing to rule out a diagnosis of deep vein thrombosis (DVT) in a hospital emergency department. Retrospective cross-sectional analysis of the case records of all patients attending a hospital emergency department with suspected lower-limb DVT between 1998 and 2002. Ten clinical prediction scores were calculated and D-dimer levels were available for all patients. The gold standard was ultrasound diagnosis of DVT by an independent radiologist who was blinded to clinical records. For each prediction rule, we analyzed the effectiveness of the prediction strategy defined by "low clinical probability and negative D-dimer level" against the ultrasound diagnosis. A total of 861 case records were reviewed and 577 cases were selected; the mean (SD) age was 66.7 (14.2) years. DVT was diagnosed in 145 patients (25.1%). Only the Wells clinical prediction rule and 4 other models had a false negative rate under 2%. The Wells criteria and the score published by Johanning and colleagues identified higher percentages of cases (15.6% and 11.6%, respectively). This study shows that several clinical prediction rules can be safely used in the emergency department, although none of them have proven more effective than the Wells criteria.
Understanding pyrotechnic shock dynamics and response attenuation over distance
NASA Astrophysics Data System (ADS)
Ott, Richard J.
Pyrotechnic shock events used during stage separation on rocket vehicles produce high-amplitude, short-duration structural responses that can lead to malfunction or degradation of electronic components, cracks and fractures in brittle materials, and local plastic deformation, and can accelerate material fatigue. These transient loads propagate as waves through the structural media, losing energy as they travel outward from the source. This work assessed available test data in an effort to better understand attenuation characteristics associated with wave propagation, and attempted to update a historical standard defined by the Martin Marietta Corporation in the late 1960s using data acquisition systems that are now out of date. Two data sets were available for consideration. The first data set came from a test that used a flight-like cylinder from NASA's Ares I-X program, and the second from a test conducted with a flat plate. Both data sets suggested that the historical standard was not a conservative estimate of shock attenuation with distance; however, the variation in the test data did not support recommending an update to the standard. Beyond considering attenuation with distance, an effort was made to model the flat plate configuration using finite element analysis. The available flat plate data consisted of three groups of tests, each with a unique charge density of the linear shaped charge (LSC) used to cut an aluminum plate. The model was tuned to a representative test using the lowest charge density LSC as input. The correlated model was then used to predict the other two cases by linearly scaling the input load based on the relative difference in charge density. The resulting model predictions were then compared with the available empirical data. Aside from differences in amplitude due to nonlinearities associated with scaling the charge density of the LSC, the model predictions matched the available test data reasonably well. Finally, modeling best practices were recommended for using industry-standard software to predict shock response on structures. As part of the best practices documented, a frequency-dependent damping schedule is provided that can be used in model development when no data are available.
On predicting monitoring system effectiveness
NASA Astrophysics Data System (ADS)
Cappello, Carlo; Sigurdardottir, Dorotea; Glisic, Branko; Zonta, Daniele; Pozzi, Matteo
2015-03-01
While the objective of structural design is to achieve stability with an appropriate level of reliability, the design of systems for structural health monitoring is performed to identify a configuration that enables acquisition of data with an appropriate level of accuracy in order to understand the performance of a structure or its condition state. However, a rational standardized approach for monitoring system design is not fully available. Hence, when engineers design a monitoring system, their approach is often heuristic with performance evaluation based on experience, rather than on quantitative analysis. In this contribution, we propose a probabilistic model for the estimation of monitoring system effectiveness based on information available in prior condition, i.e. before acquiring empirical data. The presented model is developed considering the analogy between structural design and monitoring system design. We assume that the effectiveness can be evaluated based on the prediction of the posterior variance or covariance matrix of the state parameters, which we assume to be defined in a continuous space. Since the empirical measurements are not available in prior condition, the estimation of the posterior variance or covariance matrix is performed considering the measurements as a stochastic variable. Moreover, the model takes into account the effects of nuisance parameters, which are stochastic parameters that affect the observations but cannot be estimated using monitoring data. Finally, we present an application of the proposed model to a real structure. The results show how the model enables engineers to predict whether a sensor configuration satisfies the required performance.
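The idea that effectiveness can be scored before data are acquired is easiest to see in the linear-Gaussian special case, sketched below: the posterior covariance depends on the sensor configuration but not on the measured values. This is an illustrative simplification; the paper's model is more general and also treats nuisance parameters.

```python
import numpy as np

# With y = H x + v, v ~ N(0, R), and prior x ~ N(mu0, P0), the posterior
# covariance is (P0^-1 + H^T R^-1 H)^-1, independent of the data y, so a
# candidate sensor configuration (H, R) can be scored in prior condition.

def predicted_posterior_cov(P0, H, R):
    return np.linalg.inv(np.linalg.inv(P0) + H.T @ np.linalg.inv(R) @ H)

P0 = np.diag([4.0, 4.0])                  # prior uncertainty on 2 state parameters
H_a = np.array([[1.0, 0.0]])              # configuration A: one sensor on x1
H_b = np.array([[1.0, 0.0], [0.0, 1.0]])  # configuration B: sensors on x1 and x2
for name, H in [("A", H_a), ("B", H_b)]:
    P = predicted_posterior_cov(P0, H, 0.25 * np.eye(H.shape[0]))
    print(name, np.sqrt(np.diag(P)))      # predicted posterior std devs
```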
Predictive QSAR modeling workflow, model applicability domains, and virtual screening.
Tropsha, Alexander; Golbraikh, Alexander
2007-01-01
Quantitative Structure Activity Relationship (QSAR) modeling has been traditionally applied as an evaluative approach, i.e., with the focus on developing retrospective and explanatory models of existing data. Model extrapolation was considered, if at all, only in a hypothetical sense, in terms of potential modifications of known biologically active chemicals that could improve compounds' activity. This critical review re-examines the strategy and the output of the modern QSAR modeling approaches. We provide examples and arguments suggesting that current methodologies may afford robust and validated models capable of accurate prediction of compound properties for molecules not included in the training sets. We discuss a data-analytical modeling workflow developed in our laboratory that incorporates modules for combinatorial QSAR model development (i.e., using all possible binary combinations of available descriptor sets and statistical data modeling techniques), rigorous model validation, and virtual screening of available chemical databases to identify novel biologically active compounds. Our approach places particular emphasis on model validation as well as the need to define model applicability domains in the chemistry space. We present examples of studies where the application of rigorously validated QSAR models to virtual screening identified computational hits that were confirmed by subsequent experimental investigations. The emerging focus of QSAR modeling on target property forecasting brings it forward as a predictive, as opposed to evaluative, modeling approach.
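One widely used form of applicability-domain check, sketched below under the assumption of a distance-based criterion in normalized descriptor space (the authors' exact criterion may differ): a query compound is inside the domain if its mean distance to its k nearest training compounds is below a cutoff derived from the training set itself.

```python
import numpy as np

def applicability_domain(X_train, X_query, k=3, z=2.0):
    """Flag query compounds whose k-NN distance in normalized descriptor
    space exceeds mean + z*std of the training set's own k-NN distances."""
    mu, sd = X_train.mean(0), X_train.std(0) + 1e-12
    Xt, Xq = (X_train - mu) / sd, (X_query - mu) / sd
    def knn_dist(x, X):
        d = np.sort(np.linalg.norm(X - x, axis=1))
        return d[:k].mean()
    train_d = np.array([knn_dist(x, np.delete(Xt, i, 0)) for i, x in enumerate(Xt)])
    cutoff = train_d.mean() + z * train_d.std()
    return np.array([knn_dist(x, Xt) <= cutoff for x in Xq])

rng = np.random.default_rng(1)
X_train = rng.normal(size=(50, 8))             # 50 training compounds, 8 descriptors
X_query = np.vstack([rng.normal(size=(1, 8)),  # near the training data
                     rng.normal(5, 1, size=(1, 8))])  # far outside it
print(applicability_domain(X_train, X_query))  # expect [True, False]
```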
Methods for predicting unsteady takeoff and landing trajectories of the aircraft
NASA Astrophysics Data System (ADS)
Shevchenko, A.; Pavlov, B.; Nachinkina, G.
2017-01-01
The informational and situational awareness of the aircrew greatly affects the probability of accidents, during takeoff and landing in particular. To assess the current state and predict the future state of an aircraft, the energy approach to flight control is used. The key energy balance equation is generalized to the ground phases. The equation describes the accumulation of the total energy of the aircraft along the entire trajectory, including the segment ahead, whose length is defined by the required terminal energy state. For the takeoff phase, the prediction algorithm calculates the position on the runway after which the aircraft can accelerate to the speed of steady level flight and reach an altitude sufficient for clearing high obstacles. For the landing phase, the braking distance is determined. To increase the reliability of the prediction, a correction to the algorithm is introduced. Results are given from simulations of many takeoffs and landings of a passenger airliner at different weights, with an obstacle ahead and with engine failure. The operability of the algorithm correction is demonstrated.
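A simplified point-mass sketch of the energy bookkeeping described above; the parameter values and the integration scheme are illustrative and omit the paper's corrections and failure-case logic.

```python
G = 9.81  # m/s^2

def distance_to_energy(mass, thrust, drag_coeff, v0, h_target, v_target, dx=1.0):
    """Integrate specific energy E = h + V^2/(2g) along the ground run and
    return the runway distance at which the terminal energy state (screen
    height h_target at speed v_target) becomes reachable."""
    e_required = h_target + v_target**2 / (2 * G)
    e, v, x = v0**2 / (2 * G), v0, 0.0
    while e < e_required:
        drag = drag_coeff * v**2                # crude quadratic drag model
        de_dx = (thrust - drag) / (mass * G)    # d(specific energy)/d(distance)
        if de_dx <= 0:
            return None                         # required energy unreachable
        e += de_dx * dx
        v = (2 * G * e) ** 0.5                  # energy all kinetic on the runway
        x += dx
    return x

# Hypothetical twin-engine airliner numbers (illustrative only).
print(distance_to_energy(mass=70000.0, thrust=240e3, drag_coeff=8.0,
                         v0=5.0, h_target=10.7, v_target=75.0))
```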
A Large-Scale Assessment of Nucleic Acids Binding Site Prediction Programs
Miao, Zhichao; Westhof, Eric
2015-01-01
Computational prediction of nucleic acid binding sites in proteins is necessary to disentangle functional mechanisms in most biological processes and to explore the binding mechanisms. Several strategies have been proposed, but the state-of-the-art approaches display a great diversity in i) the definition of nucleic acid binding sites; ii) the training and test datasets; iii) the algorithmic methods for the prediction strategies; iv) the performance measures and v) the distribution and availability of the prediction programs. Here we report a large-scale assessment of 19 web servers and 3 stand-alone programs on 41 datasets including more than 5000 proteins derived from 3D structures of protein-nucleic acid complexes. Well-defined binary assessment criteria (specificity, sensitivity, precision, accuracy…) are applied. We found that i) the tools have been greatly improved over the years; ii) some of the approaches suffer from theoretical defects and there is still room for sorting out the essential mechanisms of binding; iii) RNA binding and DNA binding appear to follow similar driving forces and iv) dataset bias may exist in some methods. PMID:26681179
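The binary assessment criteria named above can be computed from a per-residue confusion matrix, as in this minimal sketch (toy labels; the paper reports further measures not shown here).

```python
def binary_metrics(y_true, y_pred):
    """Confusion-matrix metrics for binding (1) vs. non-binding (0) residues."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return {
        "sensitivity": tp / (tp + fn),     # recall on binding residues
        "specificity": tn / (tn + fp),     # recall on non-binding residues
        "precision":   tp / (tp + fp),
        "accuracy":    (tp + tn) / len(y_true),
    }

y_true = [1, 1, 0, 0, 1, 0, 0, 0]  # toy per-residue labels
y_pred = [1, 0, 0, 0, 1, 1, 0, 0]
print(binary_metrics(y_true, y_pred))
```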
NetMHCcons: a consensus method for the major histocompatibility complex class I predictions.
Karosiene, Edita; Lundegaard, Claus; Lund, Ole; Nielsen, Morten
2012-03-01
A key role in cell-mediated immunity is dedicated to the major histocompatibility complex (MHC) molecules that bind peptides for presentation on the cell surface. Several in silico methods capable of predicting peptide binding to MHC class I have been developed. The accuracy of these methods depends on the data available characterizing the binding specificity of the MHC molecules. It has, moreover, been demonstrated that consensus methods defined as combinations of two or more different methods led to improved prediction accuracy. This plethora of methods makes it very difficult for the non-expert user to choose the most suitable method for predicting binding to a given MHC molecule. In this study, we have therefore made an in-depth analysis of combinations of three state-of-the-art MHC-peptide binding prediction methods (NetMHC, NetMHCpan and PickPocket). We demonstrate that a simple combination of NetMHC and NetMHCpan gives the highest performance when the allele in question is included in the training and is characterized by at least 50 data points with at least ten binders. Otherwise, NetMHCpan is the best predictor. When an allele has not been characterized, the performance depends on the distance to the training data. NetMHCpan has the highest performance when close neighbours are present in the training set, while the combination of NetMHCpan and PickPocket outperforms either of the two methods for alleles with more remote neighbours. The final method, NetMHCcons, is publicly available at www.cbs.dtu.dk/services/NetMHCcons , and allows the user in an automatic manner to obtain the most accurate predictions for any given MHC molecule.
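The method-selection rule stated in the abstract can be written out directly; the sketch below paraphrases it in Python, with placeholder data structures standing in for the actual predictors and with the neighbour-distance cutoff in MHC pseudo-sequence space left abstract.

```python
def netmhccons_choice(allele, training_db, has_close_neighbour):
    """Return the predictor combination per the rule described in the abstract."""
    data = training_db.get(allele)
    if data is not None:
        if data["n_points"] >= 50 and data["n_binders"] >= 10:
            return ["NetMHC", "NetMHCpan"]  # well-characterized allele: combine
        return ["NetMHCpan"]                # sparsely characterized allele
    # Allele absent from the training data: depends on nearest neighbours.
    return ["NetMHCpan"] if has_close_neighbour else ["NetMHCpan", "PickPocket"]

db = {"HLA-A*02:01": {"n_points": 3000, "n_binders": 800},   # hypothetical counts
      "HLA-B*83:01": {"n_points": 12, "n_binders": 2}}
print(netmhccons_choice("HLA-A*02:01", db, True))   # ['NetMHC', 'NetMHCpan']
print(netmhccons_choice("HLA-C*17:01", db, False))  # ['NetMHCpan', 'PickPocket']
```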
Machine Learning Meta-analysis of Large Metagenomic Datasets: Tools and Biological Insights.
Pasolli, Edoardo; Truong, Duy Tin; Malik, Faizan; Waldron, Levi; Segata, Nicola
2016-07-01
Shotgun metagenomic analysis of the human associated microbiome provides a rich set of microbial features for prediction and biomarker discovery in the context of human diseases and health conditions. However, the use of such high-resolution microbial features presents new challenges, and validated computational tools for learning tasks are lacking. Moreover, classification rules have scarcely been validated in independent studies, posing questions about the generality and generalization of disease-predictive models across cohorts. In this paper, we comprehensively assess approaches to metagenomics-based prediction tasks and for quantitative assessment of the strength of potential microbiome-phenotype associations. We develop a computational framework for prediction tasks using quantitative microbiome profiles, including species-level relative abundances and presence of strain-specific markers. A comprehensive meta-analysis, with particular emphasis on generalization across cohorts, was performed in a collection of 2424 publicly available metagenomic samples from eight large-scale studies. Cross-validation revealed good disease-prediction capabilities, which were in general improved by feature selection and use of strain-specific markers instead of species-level taxonomic abundance. In cross-study analysis, models transferred between studies were in some cases less accurate than models tested by within-study cross-validation. Interestingly, the addition of healthy (control) samples from other studies to training sets improved disease prediction capabilities. Some microbial species (most notably Streptococcus anginosus) seem to characterize general dysbiotic states of the microbiome rather than connections with a specific disease. Our results in modelling features of the "healthy" microbiome can be considered a first step toward defining general microbial dysbiosis. The software framework, microbiome profiles, and metadata for thousands of samples are publicly available at http://segatalab.cibio.unitn.it/tools/metaml.
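The core within-study prediction task has this shape (toy data; the released framework at the URL above adds feature selection, strain-specific markers, and cross-study transfer): a random forest classifying disease status from species-level relative abundances, scored by cross-validated AUC.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_samples, n_species = 200, 300
# Synthetic relative-abundance profiles (rows sum to 1), standing in for
# species-level taxonomic profiles from shotgun metagenomics.
X = rng.dirichlet(np.ones(n_species), size=n_samples)
# Synthetic disease labels weakly driven by one species' abundance.
y = (X[:, 0] + rng.normal(0, 0.005, n_samples) > 1.0 / n_species).astype(int)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
scores = cross_val_score(clf, X, y, cv=10, scoring="roc_auc")  # within-study CV
print(f"AUC = {scores.mean():.2f} +/- {scores.std():.2f}")
```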
Early detection of Alzheimer disease: methods, markers, and misgivings.
Green, R C; Clarke, V C; Thompson, N J; Woodard, J L; Letz, R
1997-01-01
There is at present no reliable predictive test for most forms of Alzheimer disease (AD). Although some information about future risk for disease is available in theory through ApoE genotyping, it is of limited accuracy and utility. Once neuroprotective treatments are available for AD, reliable early detection will become a key component of the treatment strategy. We recently conducted a pilot survey eliciting attitudes and beliefs toward an unspecified and hypothetical predictive test for AD. The survey was completed by a convenience sample of 176 individuals, aged 22-77, which was 75% female, 30% African-American, and of which 33% had a family member with AD. The survey revealed that 69% of this sample would elect to obtain predictive testing for AD if the test were 100% accurate. Individuals were more likely to desire predictive testing if they had an a priori belief that they would develop AD (p = 0.0001), had a lower educational level (p = 0.003), were worried that they would develop AD (p = 0.02), had a self-defined history of depression (p = 0.04), and had a family member with AD (p = 0.04). However, the desire for predictive testing was not significantly associated with age, gender, ethnicity, or income. The desire to obtain predictive testing for AD decreased as the assumed accuracy of the hypothetical test decreased. A better short-term strategy for early detection of AD may be computer-based neuropsychological screening of at-risk (older aged) individuals to identify very early cognitive impairment. Individuals identified in this manner could be referred for diagnostic evaluation and early cases of AD could be identified and treated. A new self-administered, touch-screen, computer-based, neuropsychological screening instrument called Neurobehavioral Evaluation System-3 is described, which may facilitate this type of screening.
PharmDock: a pharmacophore-based docking program
2014-01-01
Background Protein-based pharmacophore models are enriched with the information of potential interactions between ligands and the protein target. We have shown in a previous study that protein-based pharmacophore models can be applied for ligand pose prediction and pose ranking. In this publication, we present a new pharmacophore-based docking program PharmDock that combines pose sampling and ranking based on optimized protein-based pharmacophore models with local optimization using an empirical scoring function. Results Tests of PharmDock on ligand pose prediction, binding affinity estimation, compound ranking and virtual screening yielded comparable or better performance to existing and widely used docking programs. The docking program comes with an easy-to-use GUI within PyMOL. Two features have been incorporated in the program suite that allow for user-defined guidance of the docking process based on previous experimental data. Docking with those features demonstrated superior performance compared to unbiased docking. Conclusion A protein pharmacophore-based docking program, PharmDock, has been made available with a PyMOL plugin. PharmDock and the PyMOL plugin are freely available from http://people.pharmacy.purdue.edu/~mlill/software/pharmdock. PMID:24739488
"Going to town": Large-scale norming and statistical analysis of 870 American English idioms.
Bulkes, Nyssa Z; Tanner, Darren
2017-04-01
An idiom is classically defined as a formulaic sequence whose meaning is comprised of more than the sum of its parts. For this reason, idioms pose a unique problem for models of sentence processing, as researchers must take into account how idioms vary and along what dimensions, as these factors can modulate the ease with which an idiomatic interpretation can be activated. In order to help ensure external validity and comparability across studies, idiom research benefits from the availability of publicly available resources reporting ratings from a large number of native speakers. Resources such as the one outlined in the current paper facilitate opportunities for consensus across studies on idiom processing and help to further our goals as a research community. To this end, descriptive norms were obtained for 870 American English idioms from 2,100 participants along five dimensions: familiarity, meaningfulness, literal plausibility, global decomposability, and predictability. Idiom familiarity and meaningfulness strongly correlated with one another, whereas familiarity and meaningfulness were positively correlated with both global decomposability and predictability. Correlations with previous norming studies are also discussed.
Altan, Irem; Charbonneau, Patrick; Snell, Edward H.
2016-01-01
Crystallization is a key step in macromolecular structure determination by crystallography. While a robust theoretical treatment of the process is available, due to the complexity of the system, the experimental process is still largely one of trial and error. In this article, efforts in the field are discussed together with a theoretical underpinning using a solubility phase diagram. Prior knowledge has been used to develop tools that computationally predict the crystallization outcome and define mutational approaches that enhance the likelihood of crystallization. For the most part these tools are based on binary outcomes (crystal or no crystal), and the full information contained in an assembly of crystallization screening experiments is lost. The potential of this additional information is illustrated by examples where new biological knowledge can be obtained and where a target can be sub-categorized to predict which class of reagents provides the crystallization driving force. Computational analysis of crystallization requires complete and correctly formatted data. While massive crystallization screening efforts are under way, the data available from many of these studies are sparse. The potential for this data and the steps needed to realize this potential are discussed. PMID:26792536
Life Extending Control. [mechanical fatigue in reusable rocket engines
NASA Technical Reports Server (NTRS)
Lorenzo, Carl F.; Merrill, Walter C.
1991-01-01
The concept of Life Extending Control is defined. Life is defined in terms of mechanical fatigue life. A brief description is given of the current approach to life prediction using a local, cyclic, stress-strain approach for a critical system component. An alternative approach to life prediction based on a continuous functional relationship to component performance is proposed. Based on cyclic life prediction, an approach to life extending control, called the Life Management Approach, is proposed. A second approach, also based on cyclic life prediction, called the implicit approach, is presented. Assuming the existence of the alternative functional life prediction approach, two additional concepts for Life Extending Control are presented.
Life extending control: A concept paper
NASA Technical Reports Server (NTRS)
Lorenzo, Carl F.; Merrill, Walter C.
1991-01-01
The concept of Life Extending Control is defined. Life is defined in terms of mechanical fatigue life. A brief description is given of the current approach to life prediction using a local, cyclic, stress-strain approach for a critical system component. An alternative approach to life prediction based on a continuous functional relationship to component performance is proposed. Based on cyclic life prediction, an approach to Life Extending Control, called the Life Management Approach, is proposed. A second approach, also based on cyclic life prediction, called the Implicit Approach, is presented. Assuming the existence of the alternative functional life prediction approach, two additional concepts for Life Extending Control are presented.
Direct Measurements of Interplanetary Dust Particles in the Vicinity of Earth
NASA Technical Reports Server (NTRS)
McCracken, C. W.; Alexander, W. M.; Dubin, M.
1961-01-01
The direct measurements made by the Explorer VIII satellite provide the first sound basis for analyzing all available direct measurements of the distribution of interplanetary dust particles. The model average distribution curve established by such an analysis departs significantly from that predicted by the (uncertain) extrapolation of results from meteor observations. A consequence of this difference is that the daily accretion of interplanetary particulate matter by the earth is now considered to be mainly dust particles of the direct measurements range of particle size. Almost all the available direct measurements obtained with microphone systems on rockets, satellites, and spacecraft fit directly on the distribution curve defined by Explorer VIII data. The lack of reliable datum points departing significantly from the model average distribution curve means that available direct measurements show no discernible evidence of an appreciable geocentric concentration of interplanetary dust particles.
Predicting Dengue Fever Outbreaks in French Guiana Using Climate Indicators.
Adde, Antoine; Roucou, Pascal; Mangeas, Morgan; Ardillon, Vanessa; Desenclos, Jean-Claude; Rousset, Dominique; Girod, Romain; Briolant, Sébastien; Quenel, Philippe; Flamand, Claude
2016-04-01
Dengue fever epidemic dynamics are driven by complex interactions between hosts, vectors and viruses. Associations between climate and dengue have been studied around the world, but the results have shown that the impact of the climate can vary widely from one study site to another. In French Guiana, climate-based models are not available to assist in developing an early warning system. This study aims to evaluate the potential of using oceanic and atmospheric conditions to help predict dengue fever outbreaks in French Guiana. Lagged correlations and composite analyses were performed to identify the climatic conditions that characterized a typical epidemic year and to define the best indices for predicting dengue fever outbreaks during the period 1991-2013. A logistic regression was then performed to build a forecast model. We demonstrate that a model based on summer Equatorial Pacific Ocean sea surface temperatures and Azores High sea-level pressure had predictive value and was able to predict 80% of the outbreaks while incorrectly predicting only 15% of the non-epidemic years. Predictions for 2014-2015 were consistent with the observed non-epidemic conditions, and an outbreak in early 2016 was predicted. These findings indicate that outbreak resurgence can be modeled using a simple combination of climate indicators. This might be useful for anticipating public health actions to mitigate the effects of major outbreaks, particularly in areas where resources are limited and medical infrastructures are generally insufficient.
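A sketch of the forecasting step with synthetic data: a logistic regression of epidemic versus non-epidemic years on two summer climate indices. The index values and coefficients are made up; only the model form follows the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n_years = 23  # mirroring the 1991-2013 study period
sst = rng.normal(27.0, 0.6, n_years)    # summer equatorial Pacific SST index (deg C)
slp = rng.normal(1024.0, 2.0, n_years)  # Azores High sea-level pressure index (hPa)
# Synthetic outbreak labels generated from an assumed climate dependence.
logit = 1.5 * (sst - 27.0) - 0.6 * (slp - 1024.0) - 0.5
epidemic = (rng.uniform(size=n_years) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([sst, slp])
model = LogisticRegression().fit(X, epidemic)
print("P(outbreak) next year:", model.predict_proba([[27.8, 1022.5]])[0, 1])
```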
Implications of the principle of maximum conformality for the QCD strong coupling
Deur, Alexandre; Shen, Jian-Ming; Wu, Xing-Gang; ...
2017-08-14
The Principle of Maximum Conformality (PMC) provides scale-fixed perturbative QCD predictions which are independent of the choice of the renormalization scheme, as well as the choice of the initial renormalization scale. In this article, we will test the PMC by comparing its predictions for the strong coupling $\alpha^{s}_{g_1}(Q)$, defined from the Bjorken sum rule, with predictions using conventional pQCD scale-setting. The two results are found to be compatible with each other and with the available experimental data. However, the PMC provides a significantly more precise determination, although its domain of applicability ($Q \gtrsim 1.5$ GeV) does not extend to as small values of momentum transfer as that of a conventional pQCD analysis ($Q \gtrsim 1$ GeV). In conclusion, we suggest that the PMC range of applicability could be improved by a modified intermediate scheme choice or using a single effective PMC scale.
Validating Whole-Airway CFD Predictions of DPI Aerosol Deposition at Multiple Flow Rates.
Longest, P Worth; Tian, Geng; Khajeh-Hosseini-Dalasm, Navvab; Hindle, Michael
2016-12-01
The objective of this study was to compare aerosol deposition predictions of a new whole-airway CFD model with available in vivo data for a dry powder inhaler (DPI) considered across multiple inhalation waveforms, which affect both the particle size distribution (PSD) and particle deposition. The Novolizer DPI with a budesonide formulation was selected based on the availability of 2D gamma scintigraphy data in humans for three different well-defined inhalation waveforms. Initial in vitro cascade impaction experiments were conducted at multiple constant (square-wave) particle sizing flow rates to characterize PSDs. The whole-airway CFD modeling approach implemented the experimentally determined PSDs at the point of aerosol formation in the inhaler. Complete characteristic airway geometries for an adult were evaluated through the lobar bronchi, followed by stochastic individual pathway (SIP) approximations through the tracheobronchial region and new acinar moving wall models of the alveolar region. It was determined that the PSD used for each inhalation waveform should be based on a constant particle sizing flow rate equal to the average of the inhalation waveform's peak inspiratory flow rate (PIFR) and mean flow rate [i.e., AVG(PIFR, Mean)]. Using this technique, agreement with the in vivo data was acceptable with <15% relative differences averaged across the three regions considered for all inhalation waveforms. Defining a peripheral to central deposition ratio (P/C) based on alveolar and tracheobronchial compartments, respectively, large flow-rate-dependent differences were observed, which were not evident in the original 2D in vivo data. The agreement between the CFD predictions and in vivo data was dependent on accurate initial estimates of the PSD, emphasizing the need for a combination in vitro-in silico approach. Furthermore, use of the AVG(PIFR, Mean) value was identified as a potentially useful method for characterizing a DPI aerosol at a constant flow rate.
Validating Whole-Airway CFD Predictions of DPI Aerosol Deposition at Multiple Flow Rates
Tian, Geng; Khajeh-Hosseini-Dalasm, Navvab; Hindle, Michael
2016-01-01
Abstract Background: The objective of this study was to compare aerosol deposition predictions of a new whole-airway CFD model with available in vivo data for a dry powder inhaler (DPI) considered across multiple inhalation waveforms, which affect both the particle size distribution (PSD) and particle deposition. Methods: The Novolizer DPI with a budesonide formulation was selected based on the availability of 2D gamma scintigraphy data in humans for three different well-defined inhalation waveforms. Initial in vitro cascade impaction experiments were conducted at multiple constant (square-wave) particle sizing flow rates to characterize PSDs. The whole-airway CFD modeling approach implemented the experimentally determined PSDs at the point of aerosol formation in the inhaler. Complete characteristic airway geometries for an adult were evaluated through the lobar bronchi, followed by stochastic individual pathway (SIP) approximations through the tracheobronchial region and new acinar moving wall models of the alveolar region. Results: It was determined that the PSD used for each inhalation waveform should be based on a constant particle sizing flow rate equal to the average of the inhalation waveform's peak inspiratory flow rate (PIFR) and mean flow rate [i.e., AVG(PIFR, Mean)]. Using this technique, agreement with the in vivo data was acceptable with <15% relative differences averaged across the three regions considered for all inhalation waveforms. Defining a peripheral to central deposition ratio (P/C) based on alveolar and tracheobronchial compartments, respectively, large flow-rate-dependent differences were observed, which were not evident in the original 2D in vivo data. Conclusions: The agreement between the CFD predictions and in vivo data was dependent on accurate initial estimates of the PSD, emphasizing the need for a combination in vitro–in silico approach. Furthermore, use of the AVG(PIFR, Mean) value was identified as a potentially useful method for characterizing a DPI aerosol at a constant flow rate. PMID:27082824
Prediction and Stability of Reading Problems in Middle Childhood
ERIC Educational Resources Information Center
Ritchey, Kristen D.; Silverman, Rebecca D.; Schatschneider, Christopher; Speece, Deborah L.
2015-01-01
The longitudinal prediction of reading problems from fourth grade to sixth grade was investigated with a sample of 173 students. Reading problems at the end of sixth grade were defined by significantly below average performance (≤ 15th percentile) on reading factors defining word reading, fluency, and reading comprehension. Sixth grade poor reader…
Verdeli, Helen; Wickramaratne, Priya; Warner, Virginia; Mancini, Anthony; Weissman, Myrna
2014-01-01
Understanding differences in factors leading to positive outcomes in high-risk and low-risk offspring has important implications for preventive interventions. We identified variables predicting positive outcomes in a cohort of 235 offspring from 76 families in which one, both, or neither parent had major depressive disorder. Positive outcomes were termed resilient in offspring of depressed parents, and competent in offspring of non-depressed parents, and defined by two separate criteria: absence of psychiatric diagnosis and consistently high functioning at 2, 10, and 20 years follow-up. In offspring of depressed parents, easier temperament and higher self-esteem were associated with greater odds of resilient outcome defined by absence of diagnosis. Lower maternal overprotection, greater offspring self-esteem, and higher IQ were associated with greater odds of resilient outcome defined by consistently high functioning. Multivariate analysis indicated that resilient outcome defined by absence of diagnosis was best predicted by offspring self-esteem; resilient outcome defined by functioning was best predicted by maternal overprotection and self-esteem. Among offspring of non-depressed parents, greater family cohesion, easier temperament and higher self-esteem were associated with greater odds of offspring competent outcome defined by absence of diagnosis. Higher maternal affection and greater offspring self-esteem were associated with greater odds of competent outcome, defined by consistently high functioning. Multivariate analysis for each criterion indicated that competent outcome was best predicted by offspring self-esteem. As the most robust predictor of positive outcomes in offspring of depressed and non-depressed parents, self-esteem is an important target for youth preventive interventions. PMID:25374449
Machine Learning for Flood Prediction in Google Earth Engine
NASA Astrophysics Data System (ADS)
Kuhn, C.; Tellman, B.; Max, S. A.; Schwarz, B.
2015-12-01
With the increasing availability of high-resolution satellite imagery, dynamic flood mapping in near real time is becoming a reachable goal for decision-makers. This talk describes a newly developed framework for predicting biophysical flood vulnerability using public data, cloud computing and machine learning. Our objective is to define an approach to flood inundation modeling using statistical learning methods deployed in a cloud-based computing platform. Traditionally, static flood extent maps grounded in physically based hydrologic models can require hours of human expertise to construct at significant financial cost. In addition, desktop modeling software and limited local server storage can impose restraints on the size and resolution of input datasets. Data-driven, cloud-based processing holds promise for predictive watershed modeling at a wide range of spatio-temporal scales. However, these benefits come with constraints. In particular, parallel computing limits a modeler's ability to simulate the flow of water across a landscape, rendering traditional routing algorithms unusable in this platform. Our project pushes these limits by testing the performance of two machine learning algorithms, Support Vector Machine (SVM) and Random Forests, at predicting flood extent. Constructed in Google Earth Engine, the model mines a suite of publicly available satellite imagery layers to use as algorithm inputs. Results are cross-validated using MODIS-based flood maps created using the Dartmouth Flood Observatory detection algorithm. Model uncertainty highlights the difficulty of deploying unbalanced training data sets based on rare extreme events.
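In outline, the classification step can look like the following Earth Engine Python API sketch; the asset IDs, band names, and label property are hypothetical placeholders, and running it requires an authenticated Earth Engine account.

```python
import ee

ee.Initialize()

bands = ["B2", "B3", "B4", "B8"]                             # hypothetical predictor bands
image = ee.Image("users/example/predictor_stack")            # hypothetical asset
points = ee.FeatureCollection("users/example/flood_labels")  # hypothetical labels

# Sample predictor values at labeled points; 'flooded' (0/1) would come from
# reference flood maps such as the MODIS/Dartmouth Flood Observatory product.
training = image.select(bands).sampleRegions(
    collection=points, properties=["flooded"], scale=30)

classifier = ee.Classifier.smileRandomForest(numberOfTrees=100).train(
    features=training, classProperty="flooded", inputProperties=bands)

flood_map = image.select(bands).classify(classifier)  # per-pixel flood class
```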
Merchant, Faisal M; Heist, E Kevin; Nandigam, K Veena; Mulligan, Lawrence J; Blendea, Dan; Riedl, Lindsay; McCarty, David; Orencole, Mary; Picard, Michael H; Ruskin, Jeremy N; Singh, Jagmeet P
2010-05-01
Both anatomic interlead separation and left ventricle lead electrical delay (LVLED) have been associated with outcomes following cardiac resynchronization therapy (CRT). However, the relationship between interlead distance and electrical delay in predicting CRT outcomes has not been defined. We studied 61 consecutive patients undergoing CRT for standard clinical indications. All patients underwent intraprocedural measurement of LVLED. Interlead distances in the horizontal (HD), vertical (VD), and direct (DD) dimensions were measured from postprocedure chest radiographs (CXR). Remodeling indices [percent change in left ventricle (LV) ejection fraction, end-diastolic, end-systolic dimensions] were assessed by transthoracic echocardiogram. There was a positive correlation between corrected LVLED and HD on lateral CXR (r = 0.361, P = 0.004) and a negative correlation between LVLED and VD on posteroanterior (PA) CXR (r = -0.281, P = 0.028). To account for this inverse relationship, we developed a composite anatomic distance (defined as lateral HD minus PA VD), which correlated most closely with LVLED (r = 0.404, P = 0.001). Follow-up was available for 48 patients. At a mean of 4.1 +/- 3.2 months, patients with optimal values for both corrected LVLED (≥75%) and composite anatomic distance (≥15 cm) demonstrated greater reverse LV remodeling than patients with either one or neither of these optimized values. We identified a significant correlation between LV-right ventricular interlead distance and LVLED; additionally, both parameters act synergistically in predicting LV anatomic reverse remodeling. Efforts to optimize both interlead distance and electrical delay may improve CRT outcomes.
Staley, Dennis M.; Negri, Jacquelyn A.; Kean, Jason W.; Laber, Jayme L.; Tillery, Anne C.; Youberg, Ann M.
2016-06-30
Wildfire can significantly alter the hydrologic response of a watershed to the extent that even modest rainstorms can generate dangerous flash floods and debris flows. To reduce public exposure to hazard, the U.S. Geological Survey produces post-fire debris-flow hazard assessments for select fires in the western United States. We use publicly available geospatial data describing basin morphology, burn severity, soil properties, and rainfall characteristics to estimate the statistical likelihood that debris flows will occur in response to a storm of a given rainfall intensity. Using an empirical database and refined geospatial analysis methods, we defined new equations for the prediction of debris-flow likelihood using logistic regression methods. We showed that the new logistic regression model outperformed previous models used to predict debris-flow likelihood.
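The published model has the familiar logistic form; the sketch below shows that shape with invented coefficients (the USGS model uses regression-fit values with specific rainfall, burn-severity, soil, and basin-morphology predictors).

```python
import math

def debris_flow_likelihood(rain_intensity_mm_h, prop_burned, soil_index,
                           b=(-3.6, 0.07, 2.0, 1.1)):
    """Logistic probability of a post-fire debris flow; coefficients b are
    hypothetical stand-ins for the regression-fit values."""
    x = (b[0] + b[1] * rain_intensity_mm_h
         + b[2] * prop_burned + b[3] * soil_index)
    return 1.0 / (1.0 + math.exp(-x))

# 24 mm/h storm over a basin 60% burned, with a moderate soil erodibility index.
print(f"{debris_flow_likelihood(24.0, 0.6, 0.8):.2f}")  # ~0.54
```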
Mervis, C B; Bertrand, J
1995-11-01
Acquisition of the novel name-nameless category (N3C) principle by 22 children with Down syndrome between the ages of 2.42 and 3.33 years was examined to investigate the generalizability of a new approach to early lexical development: the developmental lexical principles framework. Results indicated that, as predicted, the N3C principle (operationally defined as the ability to fast map a new word to a [basic level] category), is not available at the start of lexical acquisition. The predicted link between ability to use the N3C principle and ability to perform exhaustive categorization of objects was supported. Children who used the principle had significantly larger productive vocabularies than did those who did not and, according to maternal report, had begun to acquire new words rapidly.
Surgical resource utilization in urban terrorist bombing: a computer simulation.
Hirshberg, A; Stein, M; Walden, R
1999-09-01
The objective of this study was to analyze the utilization of surgical staff and facilities during an urban terrorist bombing incident. A discrete-event computer model of the emergency room and related hospital facilities was constructed and implemented, based on cumulated data from 12 urban terrorist bombing incidents in Israel. The simulation predicts that the admitting capacity of the hospital depends primarily on the number of available surgeons and defines an optimal staff profile for surgeons, residents, and trauma nurses. The major bottlenecks in the flow of critical casualties are the shock rooms and the computed tomographic scanner but not the operating rooms. The simulation also defines the number of reinforcement staff needed to treat noncritical casualties and shows that radiology is the major obstacle to the flow of these patients. Computer simulation is an important new tool for the optimization of surgical service elements for a multiple-casualty situation.
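A toy discrete-event sketch in the spirit of the model described, using the simpy library; the arrival and treatment parameters are invented, and the real study modeled shock rooms, CT, operating rooms, and staff profiles from data on 12 incidents.

```python
import random
import simpy

# Toy parameters; the published model was calibrated on real incident data.
random.seed(0)
N_SURGEONS, N_CASUALTIES = 3, 25

def casualty(env, surgeons, done_times):
    with surgeons.request() as req:
        yield req                                      # queue for a free surgeon
        yield env.timeout(random.expovariate(1 / 40))  # mean 40 min of surgical care
        done_times.append(env.now)

def arrivals(env, surgeons, done_times):
    for _ in range(N_CASUALTIES):
        env.process(casualty(env, surgeons, done_times))
        yield env.timeout(random.expovariate(1 / 5))   # a casualty roughly every 5 min

env = simpy.Environment()
surgeons = simpy.Resource(env, capacity=N_SURGEONS)
done_times = []
env.process(arrivals(env, surgeons, done_times))
env.run()  # run until all casualties are treated
print(f"last casualty treated at t = {max(done_times):.0f} min")
```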
Classification of Aircraft Maneuvers for Fault Detection
NASA Technical Reports Server (NTRS)
Oza, Nikunj C.; Tumer, Irem Y.; Tumer, Kagan; Huff, Edward M.; Clancy, Daniel (Technical Monitor)
2002-01-01
Automated fault detection is an increasingly important problem in aircraft maintenance and operation. Standard methods of fault detection assume the availability of either data produced during all possible faulty operation modes or a clearly-defined means to determine whether the data is a reasonable match to known examples of proper operation. In our domain of fault detection in aircraft, the first assumption is unreasonable and the second is difficult to determine. We envision a system for online fault detection in aircraft, one part of which is a classifier that predicts the maneuver being performed by the aircraft as a function of vibration data and other available data. We explain where this subsystem fits into our envisioned fault detection system, as well as experiments showing the promise of this classification subsystem.
Primary aldosteronism: results of adrenalectomy for nonsingle adenoma.
Quillo, Amy R; Grant, Clive S; Thompson, Geoffrey B; Farley, David R; Richards, Melanie L; Young, William F
2011-07-01
Historically, treatment of confirmed primary aldosteronism has been adrenalectomy for unilateral adenoma; bilateral hypersecretion is treated medically. Increasingly, we use adrenal venous sampling (AVS) to define unilateral hypersecretion. Histology of glands resected based on AVS often reveals multiple nodules or hyperplasia. The aim of this study was to compare patients with multiple nodules or hyperplasia with those with single adenoma with regard to cure, preoperative imaging, AVS ratio, and biochemical evaluation to determine if a nonsingle adenoma (NSA) process could be predicted to impact extent of adrenalectomy. This was a retrospective study reviewing a single-institutional surgical experience at a tertiary academic center from 1993 to 2008, during which 215 patients with primary aldosteronism underwent unilateral adrenalectomy based on imaging of a single adenoma (normal contralateral gland) or AVS ratios. Histology included single adenoma versus NSA; cure was defined as normal immediate postoperative plasma or urine aldosterone level, normal aldosterone:renin ratio, or normotension without antihypertensive medications. Follow-up (mean 13 months, range 0 to 185 months) was available for 167 patients: 132 (79%) single adenoma and 35 (21%) NSA. All 35 patients with NSA and 128 patients (97%) with single adenoma were cured. Imaging studies correctly predicted NSA in 29% and 57% when combined with AVS. Identifying patients with NSA preoperatively was impossible biochemically: mean serum and urinary aldosterone levels and AVS ratios were not different than those of the single adenoma group. Twenty-one percent of patients had NSA, all cured by unilateral adrenalectomy. No preoperative evaluation reliably predicted NSA. Therefore, total unilateral adrenalectomy was safest given the potential for incomplete resection with partial adrenalectomy. Accurate AVS is highly predictive of cure irrespective of the unilateral adrenal histology. Copyright © 2011 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
Danielsen, Ragnar; Aspelund, Thor; Harris, Tamara B.; Gudnason, Vilmundur
2016-01-01
Aims To evaluate the prevalence of significant aortic valve stenosis (AS) in a randomly selected study population of elderly individuals representing the general population of Iceland. Furthermore, to predict the number of individuals likely to have severe AS in the future. Methods and results Echocardiography and computed tomography (CT) data from individuals who participated in the AGES-Reykjavik study were used. Echocardiography data from 685 individuals (58% females) aged 67–95 years were available. In both sexes combined, the prevalence for severe AS, defined as an aortic valve area index of <0.6 cm2/m2, in the age groups <70, 70–79 and ≥80 years was 0.92%, 2.4% and 7.3%, respectively. A ROC analysis on the relation between the echocardiography data and the aortic valve calcium score on CT defined a score ≥500 to be indicative of severe AS. Subsequently, in a CT study cohort of 5256 individuals the prevalence of severe AS in the same age groups was 0.80%, 4.0% and 9.5%, respectively. Overall, the prevalence of severe AS by echocardiography and CT in individuals ≥70 years was 4.3% and 5.9%, respectively. A prediction on the number of elderly ≥70 years for the coming decades demonstrated that patients with severe AS will have increased 2.4 fold by the year 2040 and will more than triple by the year 2060. Conclusion This study, in a cohort of elderly individuals representative of the general population in a Nordic country, predicts that AS will be a large health problem in the coming decades. PMID:25171970
Kosmidou, Ioanna; Houde-Walter, Haley; Foley, Lori; Michaud, Gregory
2013-04-01
Lesion transmurality is critical to procedural success in radiofrequency catheter ablation. We sought to determine whether loss of pace capture (PC) with high-output unipolar and/or bipolar pacing predicts the formation of uniform transmural lesions. Ten juvenile swine were anaesthetized and prepped under sterile conditions. Seventy-seven isolated radiofrequency applications (RFAs) using a 3.5 mm tip-irrigated catheter were available for analysis. Pace capture was assessed before and after RFA at 10 mA/2 ms and catheter stability verified with a three-dimensional mapping system. Pace capture was defined as 1 : 1 or intermittent local capture per paced beat. Myocardial contact and catheter orientation were assessed using intracardiac echo. Endocardial and epicardial lesion areas were measured after sacrifice using 2,3,5-triphenyltetrazolium chloride staining. A uniform transmural lesion was defined as an epicardial-to-endocardial surface ratio (epi/endo) ≥ 76%. Seventy-four per cent of lesions were transmural and 55.8% of lesions had an epi/endo ratio ≥ 76%. In all, 79.2% of lesions associated with loss of bipolar PC were uniform whereas 20.8% of lesions with loss of bipolar PC were non-uniform (P = 0.006). Loss of bipolar PC was associated with higher mean epicardial/endocardial ratio compared with lesions with persistent PC (P = 0.019). Echocardiographic evidence of optimal catheter contact during RFA improved the predictive accuracy of uniform lesion formation when loss of bipolar PC was noted after RFA. Loss of bipolar PC after RFA is associated with the formation of uniform lesions in atrial tissue. Optimal catheter contact further improves the predictive accuracy associated with loss of PC.
Wang, Jian-jun; Li, Hong-bing; Kinnunen, Leena; Hu, Gang; Järvinen, Tiina M; Miettinen, Maija E; Yuan, Shenyuan; Tuomilehto, Jaakko
2007-05-01
We evaluated the ability of the metabolic syndrome (MetS), as defined by five definitions, to predict both incident CHD and diabetes combined, diabetes alone, and CHD alone in a Chinese population. The screening survey for type 2 diabetes was conducted in 1994. A follow-up study of 541 high-risk non-diabetic individuals who were free of CHD at baseline was carried out in 1999 in the Beijing area. The MetS was defined by the World Health Organization (WHO), European Group for the Study of Insulin Resistance (EGIR), American College of Endocrinology (ACE), International Diabetes Federation (IDF), and updated National Cholesterol Education Program/American Heart Association (updated NCEP) criteria. In a multiple logistic regression adjusting for age, sex, education, occupation, smoking, family history of diabetes, and total cholesterol, the relative risk of the ACE-defined MetS for incident diabetes alone (67 cases) was 2.29 (95% CI, 1.20-4.34). The MetS defined by the five definitions was associated with a 1.8-3.9 times increased risk of both incident CHD and diabetes combined (59 cases), and a 1.9-3.0 times increased risk of total incident diabetes (126 cases). None of the five definitions predicted either incident CHD alone (177 cases) or total incident CHD (236 cases). In conclusion, the MetS defined by the current definitions appears to be more effective at predicting incident diabetes than incident CHD.
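A sketch of the covariate-adjusted model described above, using statsmodels; the file name and column names are hypothetical stand-ins for the study variables, and the logistic model yields odds ratios (reported as relative risks in the abstract).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("cohort.csv")  # hypothetical layout: one row per participant
covariates = ["mets_ace", "age", "sex", "education", "occupation",
              "smoking", "family_history_diabetes", "total_cholesterol"]
X = sm.add_constant(df[covariates])

# Incident diabetes alone at the 1999 follow-up as the outcome.
fit = sm.Logit(df["incident_diabetes"], X).fit()

or_mets = np.exp(fit.params["mets_ace"])
ci_low, ci_high = np.exp(fit.conf_int().loc["mets_ace"])
print(f"ACE-defined MetS: OR {or_mets:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```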
Schistosomes, snails and satellites.
Brooker, S
2002-05-01
This paper gives an overview of the recent progress made in the use and application of geographical information systems (GIS) and remotely sensed (RS) satellite sensor data for the epidemiology and control of schistosomiasis in sub-Saharan Africa. Details are given of the use of GIS to collate, map and analyse available parasitological data. The use of RS data to understand better the broad scale environmental factors influencing schistosome distribution is defined and examples detailed for the prediction of schistosomiasis in unsampled areas. Finally, the current practical application of GIS and remote sensing are reviewed in the context of national control programmes.
Multi-band description of the specific heat and thermodynamic critical field in MgB2 superconductor
NASA Astrophysics Data System (ADS)
Szcześniak, R.; Jarosik, M. W.; Tarasewicz, P.; Durajski, A. P.
2018-05-01
The thermodynamic properties of the MgB2 superconductor can be explained using multi-band models. In the present paper we have examined the experimental data available in the literature and found that it is possible to reproduce the measured values of the superconducting energy gaps, the thermodynamic critical magnetic field and the specific heat jump within the framework of the two-band Eliashberg formalism and an appropriately defined free energy difference between the superconducting and normal states. Moreover, we found that the obtained results differ significantly from the predictions of the conventional Bardeen-Cooper-Schrieffer theory.
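For reference, the free-energy difference ΔF(T) between the superconducting and normal states fixes both measured quantities through standard thermodynamic identities (CGS units; a textbook summary, not the paper's two-band expressions):

```latex
H_c(T) = \sqrt{-8\pi\,\Delta F(T)}, \qquad
\Delta C(T) = C_S(T) - C_N(T) = -T\,\frac{\mathrm{d}^2\,\Delta F(T)}{\mathrm{d}T^2}
```

The conventional BCS benchmark referred to above fixes the normalized specific heat jump at ΔC(T_c)/C_N(T_c) ≈ 1.43, so deviations from this ratio are what the two-band treatment is designed to capture.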
A critical assessment of Mus musculus gene function prediction using integrated genomic evidence
Peña-Castillo, Lourdes; Tasan, Murat; Myers, Chad L; Lee, Hyunju; Joshi, Trupti; Zhang, Chao; Guan, Yuanfang; Leone, Michele; Pagnani, Andrea; Kim, Wan Kyu; Krumpelman, Chase; Tian, Weidong; Obozinski, Guillaume; Qi, Yanjun; Mostafavi, Sara; Lin, Guan Ning; Berriz, Gabriel F; Gibbons, Francis D; Lanckriet, Gert; Qiu, Jian; Grant, Charles; Barutcuoglu, Zafer; Hill, David P; Warde-Farley, David; Grouios, Chris; Ray, Debajyoti; Blake, Judith A; Deng, Minghua; Jordan, Michael I; Noble, William S; Morris, Quaid; Klein-Seetharaman, Judith; Bar-Joseph, Ziv; Chen, Ting; Sun, Fengzhu; Troyanskaya, Olga G; Marcotte, Edward M; Xu, Dong; Hughes, Timothy R; Roth, Frederick P
2008-01-01
Background: Several years after sequencing the human genome and the mouse genome, much remains to be discovered about the functions of most human and mouse genes. Computational prediction of gene function promises to help focus limited experimental resources on the most likely hypotheses. Several algorithms using diverse genomic data have been applied to this task in model organisms; however, the performance of such approaches in mammals has not yet been evaluated. Results: In this study, a standardized collection of mouse functional genomic data was assembled; nine bioinformatics teams used this data set to independently train classifiers and generate predictions of function, as defined by Gene Ontology (GO) terms, for 21,603 mouse genes; and the best performing submissions were combined in a single set of predictions. We identified strengths and weaknesses of current functional genomic data sets and compared the performance of function prediction algorithms. This analysis inferred functions for 76% of mouse genes, including 5,000 currently uncharacterized genes. At a recall rate of 20%, a unified set of predictions averaged 41% precision, with 26% of GO terms achieving a precision better than 90%. Conclusion: We performed a systematic evaluation of diverse, independently developed computational approaches for predicting gene function from heterogeneous data sources in mammals. The results show that currently available data for mammals allows predictions with both breadth and accuracy. Importantly, many highly novel predictions emerge for the 38% of mouse genes that remain uncharacterized. PMID:18613946
Avsec, Žiga; Cheng, Jun; Gagneur, Julien
2018-01-01
Abstract Motivation Regulatory sequences are not solely defined by their nucleic acid sequence but also by their relative distances to genomic landmarks such as transcription start site, exon boundaries or polyadenylation site. Deep learning has become the approach of choice for modeling regulatory sequences because of its strength to learn complex sequence features. However, modeling relative distances to genomic landmarks in deep neural networks has not been addressed. Results Here we developed spline transformation, a neural network module based on splines to flexibly and robustly model distances. Modeling distances to various genomic landmarks with spline transformations significantly increased state-of-the-art prediction accuracy of in vivo RNA-binding protein binding sites for 120 out of 123 proteins. We also developed a deep neural network for human splice branchpoint based on spline transformations that outperformed the current best, already distance-based, machine learning model. Compared to piecewise linear transformation, as obtained by composition of rectified linear units, spline transformation yields higher prediction accuracy as well as faster and more robust training. As spline transformation can be applied to further quantities beyond distances, such as methylation or conservation, we foresee it as a versatile component in the genomics deep learning toolbox. Availability and implementation Spline transformation is implemented as a Keras layer in the CONCISE python package: https://github.com/gagneurlab/concise. Analysis code is available at https://github.com/gagneurlab/Manuscript_Avsec_Bioinformatics_2017. Contact avsec@in.tum.de or gagneur@in.tum.de Supplementary information Supplementary data are available at Bioinformatics online. PMID:29155928
Predicting In Vivo Efficacy of Therapeutic Bacteriophages Used To Treat Pulmonary Infections
Henry, Marine; Lavigne, Rob
2013-01-01
The potential of bacteriophage therapy to treat infections caused by antibiotic-resistant bacteria has now been well established using various animal models. While numerous newly isolated bacteriophages have been claimed to be potential therapeutic candidates on the basis of in vitro observations, the parameters used to guide their choice among billions of available bacteriophages are still not clearly defined. We made use of a mouse lung infection model and a bioluminescent strain of Pseudomonas aeruginosa to compare the activities in vitro and in vivo of a set of nine different bacteriophages (PAK_P1, PAK_P2, PAK_P3, PAK_P4, PAK_P5, CHA_P1, LBL3, LUZ19, and PhiKZ). For seven bacteriophages, a good correlation was found between in vitro and in vivo activity. While the remaining two bacteriophages were active in vitro, they were not sufficiently active in vivo under similar conditions to rescue infected animals. Based on the bioluminescence recorded at 2 and 8 h postinfection, we also define for the first time a reliable index to predict treatment efficacy. Our results showed that the bacteriophages isolated directly on the targeted host were the most efficient in vivo, supporting a personalized approach favoring an optimal treatment. PMID:24041900
AFAL: a web service for profiling amino acids surrounding ligands in proteins
NASA Astrophysics Data System (ADS)
Arenas-Salinas, Mauricio; Ortega-Salazar, Samuel; Gonzales-Nilo, Fernando; Pohl, Ehmke; Holmes, David S.; Quatrini, Raquel
2014-11-01
With advancements in crystallographic technology and the increasing wealth of information populating structural databases, there is an increasing need for prediction tools based on spatial information that will support the characterization of proteins and protein-ligand interactions. Herein, a new web service is presented termed amino acid frequency around ligand (AFAL) for determining the types and frequencies of amino acids surrounding ligands within proteins deposited in the Protein Data Bank and for assessing the atoms and atom-ligand distances involved in each interaction (availability: http://structuralbio.utalca.cl/AFAL/index.html). AFAL allows the user to define a wide variety of filtering criteria (protein family, source organism, resolution, sequence redundancy and distance) in order to uncover trends and evolutionary differences in amino acid preferences that define interactions with particular ligands. Results obtained from AFAL provide valuable statistical information about amino acids that may be responsible for establishing particular ligand-protein interactions. The analysis will enable investigators to compare ligand-binding sites of different proteins and to uncover general as well as specific interaction patterns from existing data. Such patterns can be used subsequently to predict ligand binding in proteins that currently have no structural information and to refine the interpretation of existing protein models. The application of AFAL is illustrated by the analysis of proteins interacting with adenosine-5'-triphosphate.
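The residue-frequency profile that AFAL reports can be approximated locally with Biopython for a single structure; the sketch below assumes a PDB file on disk and tallies protein atoms within 4 Å of an ATP ligand (the file name, cutoff and ligand are illustrative choices, not AFAL's internals).

```python
from collections import Counter
from Bio.PDB import PDBParser, NeighborSearch

structure = PDBParser(QUIET=True).get_structure("prot", "1atp.pdb")  # assumed file
ns = NeighborSearch(list(structure.get_atoms()))

# Tally one count per protein atom found within 4.0 A of any ATP atom.
contacts = Counter()
for res in structure.get_residues():
    if res.get_resname() == "ATP":                 # the ligand of interest
        for lig_atom in res:
            for near in ns.search(lig_atom.coord, 4.0):
                parent = near.get_parent()
                if parent.id[0] == " ":            # standard amino acids only
                    contacts[parent.get_resname()] += 1

print(contacts.most_common())  # amino-acid contact profile around the ligand
```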
Predicting the Rate of Cognitive Decline in Alzheimer Disease: Data From the ICTUS Study.
Canevelli, Marco; Kelaiditi, Eirini; Del Campo, Natalia; Bruno, Giuseppe; Vellas, Bruno; Cesari, Matteo
2016-01-01
Different rates of cognitive progression have been observed among Alzheimer disease (AD) patients. The present study aimed at evaluating whether the rate of cognitive worsening in AD may be predicted by widely available and easy-to-assess factors. Mild to moderate AD patients were recruited in the ICTUS study. Multinomial logistic regression analysis was performed to measure the association between several sociodemographic and clinical variables and 3 different rates of cognitive decline defined by modifications (after 1 year of follow-up) of the Mini Mental State Examination (MMSE) score: (1) "slow" progression, as indicated by a decrease in the MMSE score ≤1 point; (2) "intermediate" progression, decrease in the MMSE score between 2 and 5 points; and (3) "rapid" progression, decrease in the MMSE score ≥6 points. A total of 1005 patients were considered for the present analyses. Overall, most of the study participants (52%) exhibited a slow cognitive course. Higher ADAS-Cog scores at baseline were significantly associated with both "intermediate" and "rapid" decline. Conversely, increasing age was negatively associated with "rapid" cognitive worsening. A slow progression of cognitive decline is common among AD patients. The influence of age and baseline cognitive impairment should always be carefully considered when designing AD trials and defining study populations.
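The outcome construction and model family described above are straightforward to express; a sketch with statsmodels, where the file and column names are hypothetical and only two of the study's candidate predictors are shown.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("ictus.csv")  # hypothetical: mmse_baseline, mmse_1y, age, adas_cog
decline = df["mmse_baseline"] - df["mmse_1y"]

# Three progression classes, as defined in the study.
df["progression"] = np.select(
    [decline <= 1, decline.between(2, 5), decline >= 6],
    ["slow", "intermediate", "rapid"], default="slow")

y = pd.Categorical(df["progression"],
                   categories=["slow", "intermediate", "rapid"]).codes
X = sm.add_constant(df[["age", "adas_cog"]])
fit = sm.MNLogit(y, X).fit()
print(fit.summary())   # log-odds of intermediate/rapid decline vs. slow
```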
Local backbone structure prediction of proteins
De Brevern, Alexandre G.; Benros, Cristina; Gautier, Romain; Valadié, Hélène; Hazout, Serge; Etchebest, Catherine
2004-01-01
Summary A statistical analysis of PDB structures has led us to define a new set of small 3D structural prototypes called Protein Blocks (PBs). This structural alphabet includes 16 PBs, each defined by the (φ, ψ) dihedral angles of 5 consecutive residues. The amino acid distributions observed in sequence windows encompassing these PBs are used to predict, by a Bayesian approach, the local 3D structure of proteins from the sole knowledge of their sequences. LocPred is a software tool that allows users to submit a protein sequence and performs a prediction in terms of PBs. The prediction results are given both textually and graphically. PMID:15724288
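The Bayesian step reduces to scoring each PB by position-specific amino-acid likelihoods over the sequence window; a toy log-space sketch, with a random probability tensor standing in for the PDB-derived distributions (the 15-residue window width is our assumption for illustration).

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"
N_PB, WIN = 16, 15            # 16 Protein Blocks; window width assumed here

rng = np.random.default_rng(0)
# p[b, j, a]: P(amino acid a at window position j | PB b); random stand-in
# for the distributions estimated from PDB sequence windows.
p = rng.dirichlet(np.ones(len(AA)), size=(N_PB, WIN))
prior = np.full(N_PB, 1.0 / N_PB)

def predict_pb(window):
    """Posterior-maximizing PB for a WIN-residue sequence window."""
    idx = [AA.index(a) for a in window]
    log_post = np.log(prior) + sum(np.log(p[:, j, a]) for j, a in enumerate(idx))
    return int(np.argmax(log_post))

print(predict_pb("ACDEFGHIKLMNPQR"))   # index of the predicted Protein Block
```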
Using abiotic variables to predict importance of sites for species representation.
Albuquerque, Fabio; Beier, Paul
2015-10-01
In systematic conservation planning, species distribution data for all sites in a planning area are used to prioritize each site in terms of the site's importance toward meeting the goal of species representation. But comprehensive species data are not available in most planning areas and would be expensive to acquire. As a shortcut, ecologists use surrogates, such as occurrences of birds or another well-surveyed taxon, or land types defined from remotely sensed data, in the hope that sites that represent the surrogates also represent biodiversity. Unfortunately, surrogates have not performed reliably. We propose a new type of surrogate, predicted importance, that can be developed from species data for a q% subset of sites. With species data from this subset of sites, importance can be modeled as a function of abiotic variables available at no charge for all terrestrial areas on Earth. Predicted importance can then be used as a surrogate to prioritize all sites. We tested this surrogate with 8 sets of species data. For each data set, we used a q% subset of sites to model importance as a function of abiotic variables, used the resulting function to predict importance for all sites, and evaluated the number of species in the sites with highest predicted importance. Sites with the highest predicted importance represented species efficiently for all data sets when q = 25% and for 7 of 8 data sets when q = 20%. Predicted importance requires less survey effort than direct selection for species representation and meets representation goals well compared with other surrogates currently in use. This less expensive surrogate may be useful in those areas of the world that need it most, namely tropical regions with the highest biodiversity, greatest biodiversity loss, most severe lack of inventory data, and poorly developed protected area networks. © 2015 Society for Conservation Biology.
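The surrogate workflow lends itself to a few lines of scikit-learn; the abstract does not name a model family, so the random forest below is our stand-in, and all data are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# X_abiotic: abiotic variables for all sites (freely available, global coverage);
# importance is known only for a surveyed q% subset of sites.
rng = np.random.default_rng(1)
n_sites, q = 1000, 0.25
X_abiotic = rng.normal(size=(n_sites, 8))
surveyed = rng.choice(n_sites, int(q * n_sites), replace=False)
importance_surveyed = rng.random(len(surveyed))  # stand-in for species-based importance

model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X_abiotic[surveyed], importance_surveyed)

# Predicted importance becomes the surrogate used to prioritize *all* sites.
priority = np.argsort(-model.predict(X_abiotic))
print(priority[:10])  # top-ranked candidate sites
```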
Landscape prediction and mapping of game fish biomass, an ecosystem service of Michigan rivers
Esselman, Peter C.; Stevenson, R Jan; Lupi, Frank; Riseng, Catherine M.; Wiley, Michael J.
2015-01-01
The increased integration of ecosystem service concepts into natural resource management places renewed emphasis on prediction and mapping of fish biomass as a major provisioning service of rivers. The goals of this study were to predict and map patterns of fish biomass as a proxy for the availability of catchable fish for anglers in rivers and to identify the strongest landscape constraints on fish productivity. We examined hypotheses about fish responses to total phosphorus (TP), as TP is a growth-limiting nutrient known to cause increases (subsidy response) and/or decreases (stress response) in fish biomass depending on its concentration and the species being considered. Boosted regression trees were used to define nonlinear functions that predicted the standing crops of Brook Trout Salvelinus fontinalis, Brown Trout Salmo trutta, Smallmouth Bass Micropterus dolomieu, panfishes (seven centrarchid species), and Walleye Sander vitreus by using landscape and modeled local-scale predictors. Fitted models were highly significant and explained 22–56% of the variation in validation data sets. Nonlinear and threshold responses were apparent for numerous predictors, including TP concentration, which had significant effects on all except the Walleye fishery. Brook Trout and Smallmouth Bass exhibited both subsidy and stress responses, panfish biomass exhibited a subsidy response only, and Brown Trout exhibited a stress response. Maps of reach-specific standing crop predictions showed patterns of predicted fish biomass that corresponded to spatial patterns in catchment area, water temperature, land cover, and nutrient availability. Maps illustrated predictions of higher trout biomass in coldwater streams draining glacial till in northern Michigan, higher Smallmouth Bass and panfish biomasses in warmwater systems of southern Michigan, and high Walleye biomass in large main-stem rivers throughout the state. Our results allow fisheries managers to examine the biomass potential of streams, describe geographic patterns of fisheries, explore possible nutrient management targets, and identify habitats that are candidates for species management.
NASA Astrophysics Data System (ADS)
Mahoney, D. T.; al Aamery, N. M. H.; Fox, J.
2017-12-01
The authors find that sediment (dis)connectivity has seldom taken precedence within watershed models, and the present study advances this modeling framework and applies the modeling within a bedrock-controlled system. Sediment (dis)connectivity, defined as the detachment and transport of sediment from source to sink between geomorphic zones, is a major control on sediment transport. Given the availability of high resolution geospatial data, coupling sediment connectivity concepts within sediment prediction models offers an approach to simulate sediment sources and pathways within a watershed's sediment cascade. Bedrock controlled catchments are potentially unique due to the presence of rock outcrops causing longitudinal impedance to sediment transport pathways in turn impacting the longitudinal distribution of the energy gradient responsible for conveying sediment. Therefore, the authors were motivated by the need to formulate a sediment transport model that couples sediment (dis)connectivity knowledge to predict sediment flux for bedrock controlled catchments. A watershed-scale sediment transport model was formulated that incorporates sediment (dis)connectivity knowledge collected via field reconnaissance and predicts sediment flux through coupling with the Partheniades equation and sediment continuity model. Sediment (dis)connectivity was formulated by coupling probabilistic upland lateral connectivity prediction with instream longitudinal connectivity assessments via discretization of fluid and sediment pathways. Flux predictions from the upland lateral connectivity model served as an input to the instream longitudinal connectivity model. Disconnectivity in the instream model was simulated via the discretization of stream reaches due to barriers such as bedrock outcroppings and man-made check dams. The model was tested for a bedrock controlled catchment in Kentucky, USA for which extensive historic water and sediment flux data was available. Predicted sediment flux was validated via sediment flux measurements collected by the authors. Watershed configuration and the distribution of lateral and longitudinal impedances to sediment transport were found to have significant influence on sediment connectivity and thus sediment flux.
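The instream erosion coupling named above uses the Partheniades law, which has a compact closed form; a sketch in SI units, with the erodibility M and critical shear stress tau_c as user-supplied parameters (the values below are placeholders).

```python
def partheniades_erosion(tau, tau_c=0.5, m=1e-4):
    """Erosion flux E (kg m^-2 s^-1) from bed shear stress tau (Pa).

    E = M * (tau/tau_c - 1) when tau exceeds the critical shear stress
    tau_c, else 0; M is the erodibility coefficient (kg m^-2 s^-1).
    """
    return m * max(tau / tau_c - 1.0, 0.0)

# Example: shear stresses along discretized reaches; a bedrock outcrop would be
# modelled as a longitudinal barrier breaking connectivity between reaches.
reach_tau = [0.2, 0.8, 1.5, 0.9]
fluxes = [partheniades_erosion(t) for t in reach_tau]
print(fluxes)
```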
DemQSAR: predicting human volume of distribution and clearance of drugs
NASA Astrophysics Data System (ADS)
Demir-Kavuk, Ozgur; Bentzien, Jörg; Muegge, Ingo; Knapp, Ernst-Walter
2011-12-01
In silico methods characterizing molecular compounds with respect to pharmacologically relevant properties can accelerate the identification of new drugs and reduce their development costs. Quantitative structure-activity/-property relationship (QSAR/QSPR) correlate structure and physico-chemical properties of molecular compounds with a specific functional activity/property under study. Typically a large number of molecular features are generated for the compounds. In many cases the number of generated features exceeds the number of molecular compounds with known property values that are available for learning. Machine learning methods tend to overfit the training data in such situations, i.e. the method adjusts to very specific features of the training data, which are not characteristic for the considered property. This problem can be alleviated by diminishing the influence of unimportant, redundant or even misleading features. A better strategy is to eliminate such features completely. Ideally, a molecular property can be described by a small number of features that are chemically interpretable. The purpose of the present contribution is to provide a predictive modeling approach, which combines feature generation, feature selection, model building and control of overtraining into a single application called DemQSAR. DemQSAR is used to predict human volume of distribution (VDss) and human clearance (CL). To control overtraining, quadratic and linear regularization terms were employed. A recursive feature selection approach is used to reduce the number of descriptors. The prediction performance is as good as the best predictions reported in the recent literature. The example presented here demonstrates that DemQSAR can generate a model that uses very few features while maintaining high predictive power. A standalone DemQSAR Java application for model building of any user defined property as well as a web interface for the prediction of human VDss and CL is available on the webpage of DemPRED: http://agknapp.chemie.fu-berlin.de/dempred/.
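A scikit-learn analogue of the workflow (DemQSAR itself is a standalone Java application): L2-regularized regression to control overtraining plus recursive feature elimination to shrink the descriptor set; the data are synthetic stand-ins for compound descriptors and measured VDss values.

```python
from sklearn.datasets import make_regression
from sklearn.feature_selection import RFE
from sklearn.linear_model import Ridge

# Hypothetical stand-in: many descriptors, few compounds with known VDss.
X, y = make_regression(n_samples=120, n_features=500, n_informative=15,
                       noise=0.5, random_state=0)

# L2 regularization controls overtraining; RFE prunes descriptors recursively.
selector = RFE(Ridge(alpha=1.0), n_features_to_select=20, step=0.1)
selector.fit(X, y)

print("kept descriptors:", selector.support_.sum())
print("R^2 on training data:", round(selector.score(X, y), 3))
```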
ERIC Educational Resources Information Center
Bauer, Jack J.; McAdams, Dan P.
2010-01-01
We examine (a) the normative course of eudaimonic well-being in emerging adulthood and (b) whether people's narratives of major life goals might prospectively predict eudaimonic growth 3 years later. We define eudaimonic growth as longitudinal increases in eudaimonic well-being, which we define as the combination of psychosocial maturity and…
Hur, Manhoi; Campbell, Alexis Ann; Almeida-de-Macedo, Marcia; Li, Ling; Ransom, Nick; Jose, Adarsh; Crispin, Matt; Nikolau, Basil J; Wurtele, Eve Syrkin
2013-04-01
Discovering molecular components and their functionality is key to the development of hypotheses concerning the organization and regulation of metabolic networks. The iterative experimental testing of such hypotheses is the trajectory that can ultimately enable accurate computational modelling and prediction of metabolic outcomes. This information can be particularly important for understanding the biology of natural products, whose metabolism itself is often only poorly defined. Here, we describe factors that must be in place to optimize the use of metabolomics in predictive biology. A key to achieving this vision is a collection of accurate time-resolved and spatially defined metabolite abundance data and associated metadata. One formidable challenge associated with metabolite profiling is the complexity and analytical limits associated with comprehensively determining the metabolome of an organism. Further, for metabolomics data to be efficiently used by the research community, it must be curated in publicly available metabolomics databases. Such databases require clear, consistent formats, easy access to data and metadata, data download, and accessible computational tools to integrate genome system-scale datasets. Although transcriptomics and proteomics integrate the linear predictive power of the genome, the metabolome represents the nonlinear, final biochemical products of the genome, which results from the intricate system(s) that regulate genome expression. For example, the relationship of metabolomics data to the metabolic network is confounded by redundant connections between metabolites and gene-products. However, connections among metabolites are predictable through the rules of chemistry. Therefore, enhancing the ability to integrate the metabolome with anchor-points in the transcriptome and proteome will enhance the predictive power of genomics data. We detail a public database repository for metabolomics, tools and approaches for statistical analysis of metabolomics data, and methods for integrating these datasets with transcriptomic data to create hypotheses concerning specialized metabolisms that generate the diversity in natural product chemistry. We discuss the importance of close collaborations among biologists, chemists, computer scientists and statisticians throughout the development of such integrated metabolism-centric databases and software.
CRISPRDetect: A flexible algorithm to define CRISPR arrays.
Biswas, Ambarish; Staals, Raymond H J; Morales, Sergio E; Fineran, Peter C; Brown, Chris M
2016-05-17
CRISPR (clustered regularly interspaced short palindromic repeats) RNAs provide the specificity for noncoding RNA-guided adaptive immune defence systems in prokaryotes. CRISPR arrays consist of repeat sequences separated by specific spacer sequences. CRISPR arrays have previously been identified in a large proportion of prokaryotic genomes. However, currently available detection algorithms do not utilise recently discovered features regarding CRISPR loci. We have developed a new approach to automatically detect, predict and interactively refine CRISPR arrays. It is available as a web program and command-line tool from bioanalysis.otago.ac.nz/CRISPRDetect. CRISPRDetect discovers putative arrays, extends the array by detecting additional variant repeats, corrects the direction of arrays, refines the repeat/spacer boundaries, and annotates different types of sequence variations (e.g. insertion/deletion) in near identical repeats. Due to these features, CRISPRDetect has significant advantages when compared to existing identification tools. In addition to supporting small, medium and large repeats, CRISPRDetect identified a class of arrays with 'extra-large' repeats in bacteria (repeats of 44-50 nt). The CRISPRDetect output is integrated with other analysis tools. Notably, the predicted spacers can be directly utilised by CRISPRTarget to predict targets. CRISPRDetect enables more accurate detection of arrays and spacers and its GFF output is suitable for inclusion in genome annotation pipelines and visualisation. It has been used to analyse all complete bacterial and archaeal reference genomes.
Theoretical prediction of the Grüneisen parameter for SiO2.TiO2 bulk metallic glasses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Chandra K.; Pandey, Brijesh K., E-mail: bkpmmmec11@gmail.com; Pandey, Anjani K.
2016-05-23
The Grüneisen parameter (γ) is very important for setting the limits within which the thermoelastic properties of bulk metallic glasses can be predicted. It can be defined in terms of microscopic or macroscopic parameters of the material: the former is based on the vibrational frequencies of atoms in the material, while the latter is closely related to its thermodynamic properties. Different formulations and equations of state have been used by researchers in this field to predict the Grüneisen parameter for BMGs, but for SiO2.TiO2 very little information has been available until now. In the present work we have tested the validity of two different isothermal equations of state, the Poirier-Tarantola EOS and the usual Tait EOS, for predicting the value of the Grüneisen parameter for SiO2.TiO2 as a function of compression. Applying the thermodynamic constraints relevant to the material and analyzing the results obtained, we conclude that the Poirier-Tarantola EOS gives better numerical values of the Grüneisen parameter (γ) for the SiO2.TiO2 BMG.
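For context, the macroscopic (thermodynamic) definition of γ and the EOS-based Slater form, through which any isothermal P(V) such as the Poirier-Tarantola or Tait equation yields γ as a function of compression, are standard results (not the paper's derivation):

```latex
\gamma_{\mathrm{th}} = \frac{\alpha K_T V_m}{C_V}, \qquad
\gamma_{\mathrm{Slater}} = -\frac{2}{3}
  - \frac{V}{2}\,\frac{\partial^2 P/\partial V^2}{\partial P/\partial V}
```

Here α is the volume thermal expansivity, K_T the isothermal bulk modulus, V_m the molar volume and C_V the isochoric heat capacity; the Slater form requires only the first and second volume derivatives of the chosen equation of state.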
Predictors of change in depressive symptoms from preschool to first grade.
Reinfjell, Trude; Kårstad, Silja Berg; Berg-Nielsen, Turid Suzanne; Luby, Joan L; Wichstrøm, Lars
2016-11-01
Children's depressive symptoms in the transition from preschool to school are rarely investigated. We therefore tested whether children's temperament (effortful control and negative affect), social skills, child psychopathology, environmental stressors (life events), parental accuracy of predicting their child's emotion understanding (parental accuracy), parental emotional availability, and parental depression predict changes in depressive symptoms from preschool to first grade. Parents of a community sample of 995 4-year-olds were interviewed using the Preschool Age Psychiatric Assessment. The children and parents were reassessed when the children started first grade (n = 795). The results showed that DSM-5 defined depressive symptoms increased. Child temperamental negative affect and parental depression predicted increased, whereas social skills predicted decreased, depressive symptoms. However, such social skills were only protective among children with low and medium effortful control. Further, high parental accuracy proved protective among children with low effortful control and high negative affect. Thus, interventions that treat parental depression may be important for young children. Children with low effortful control and high negative affect may especially benefit from having parents who accurately perceive their emotional understanding. Efforts to enhance social skills may prove particularly important for children with low or medium effortful control.
Zardi, Enrico Maria; Di Matteo, Francesco Maria; Pacella, Claudio Maurizio; Sanyal, Arun J
2016-01-01
Portal hypertension is a severe syndrome that may derive from pre-sinusoidal, sinusoidal and post-sinusoidal causes. As a consequence, several complications (i.e., ascites, oesophageal varices) may develop. In sinusoidal portal hypertension, hepatic venous pressure gradient (HVPG) is a reliable method for defining the grade of portal pressure, establishing the effectiveness of the treatment and predicting the occurrence of complications; however, some questions exist regarding its ability to discriminate bleeding from nonbleeding varices in cirrhotic patients. Other imaging techniques (transient elastography, endoscopy, endosonography and duplex Doppler sonography) for assessing causes and complications of portal hypertensive syndrome are available and may be valuable for the management of these patients. In this review, we evaluate invasive and non-invasive techniques currently employed to obtain a clinical prediction of deadly complications, such as variceal bleeding in patients affected by sinusoidal portal hypertension, in order to create a diagnostic algorithm to manage them. Again, HVPG appears to be the reference standard to evaluate portal hypertension and monitor the response to treatment, but its ability to predict several complications and support management decisions might be further improved through the diagnostic combination with other imaging techniques. PMID:24328372
Elskens, Marc; Vloeberghs, Daniel; Van Elsen, Liesbeth; Baeyens, Willy; Goeyens, Leo
2012-09-15
For reasons of food safety, packaging and food contact materials must be submitted to migration tests. Testing of silicone moulds is often very laborious, since three replicate tests are required to decide about their compliancy. This paper presents a general modelling framework to predict the sample's compliance or non-compliance using results of the first two migration tests. It compares the outcomes of models with multiple continuous predictors with a class of models involving latent and dummy variables. The model's prediction ability was tested using cross and external validations, i.e. model revalidation each time a new measurement set became available. At the overall migration limit of 10 mg dm⁻², the relative uncertainty on a prediction was estimated to be ~10%. Taking the default values for α and β equal to 0.05, the maximum value that can be predicted for sample compliance was therefore 7 mg dm⁻². Beyond this limit the risk for false compliant results increases significantly, and a third migration test should be performed. The result of this latter test defines the sample's compliance or non-compliance. Propositions for compliancy control inspired by the current dioxin control strategy are discussed. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Ogden, F. L.
2017-12-01
High-performance computing and the widespread availability of geospatial physiographic and forcing datasets have enabled consideration of flood impact predictions with longer lead times and more detailed spatial descriptions. We are now considering multi-hour flash flood forecast lead times at the subdivision level in so-called hydroblind regions away from the National Hydrography network. However, the computational demands of such models are high, necessitating a nested simulation approach. Research on hyper-resolution hydrologic modeling over the past three decades has illustrated some fundamental limits on predictability that are simultaneously related to runoff generation mechanism(s), antecedent conditions, rates and total amounts of precipitation, discretization of the model domain, and complexity or completeness of the model formulation. This latter point is an acknowledgement that in some ways hydrologic understanding in key areas related to land use, land cover, tillage practices, seasonality, and biological effects has some glaring deficiencies. This presentation reviews what is known about the interacting effects of precipitation amount, model spatial discretization, antecedent conditions, physiographic characteristics and model formulation completeness for runoff predictions. These interactions define a region in multidimensional forcing, parameter and process space where there are in some cases clear limits on predictability, and in other cases diminished uncertainty.
NASA Technical Reports Server (NTRS)
Fehrman, A. L.; Masek, R. V.
1972-01-01
Quantitative estimates of the uncertainty in predicting aerodynamic heating rates for a fully reusable space shuttle system are developed and the impact of these uncertainties on Thermal Protection System (TPS) weight are discussed. The study approach consisted of statistical evaluations of the scatter of heating data on shuttle configurations about state-of-the-art heating prediction methods to define the uncertainty in these heating predictions. The uncertainties were then applied as heating rate increments to the nominal predicted heating rate to define the uncertainty in TPS weight. Separate evaluations were made for the booster and orbiter, for trajectories which included boost through reentry and touchdown. For purposes of analysis, the vehicle configuration is divided into areas in which a given prediction method is expected to apply, and separate uncertainty factors and corresponding uncertainty in TPS weight derived for each area.
Song, Jiangning; Yuan, Zheng; Tan, Hao; Huber, Thomas; Burrage, Kevin
2007-12-01
Disulfide bonds are primary covalent crosslinks between two cysteine residues in proteins that play critical roles in stabilizing the protein structures and are commonly found in extracytoplasmic or secreted proteins. In protein folding prediction, the localization of disulfide bonds can greatly reduce the search in conformational space. Therefore, there is a great need to develop computational methods capable of accurately predicting disulfide connectivity patterns in proteins that could have potentially important applications. We have developed a novel method to predict disulfide connectivity patterns from protein primary sequence, using a support vector regression (SVR) approach based on multiple sequence feature vectors and secondary structure predicted by the PSIPRED program. The results indicate that our method could achieve a prediction accuracy of 74.4% and 77.9%, respectively, when averaged on proteins with two to five disulfide bridges using 4-fold cross-validation, measured at the protein and cysteine-pair levels on a well-defined non-homologous dataset. We assessed the effects of different sequence encoding schemes on the prediction performance of disulfide connectivity. It has been shown that the sequence encoding scheme based on multiple sequence feature vectors coupled with predicted secondary structure can significantly improve the prediction accuracy, thus enabling our method to outperform most of other currently available predictors. Our work provides a complementary approach to the current algorithms that should be useful in computationally assigning disulfide connectivity patterns and helps in the annotation of protein sequences generated by large-scale whole-genome projects. The prediction web server and Supplementary Material are accessible at http://foo.maths.uq.edu.au/~huber/disulfide
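Scoring individual cysteine pairs with an SVR is the core of the method; the sketch below uses synthetic feature vectors and omits the final step of assembling pair scores into a connectivity pattern (typically a maximum-weight matching over the pair graph).

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical feature vectors for cysteine pairs: windowed sequence profiles
# plus PSIPRED-style secondary-structure probabilities, flattened per pair.
rng = np.random.default_rng(2)
X_pairs = rng.normal(size=(300, 120))
bonded = rng.integers(0, 2, 300).astype(float)  # 1 = the pair forms a bridge

svr = SVR(kernel="rbf", C=1.0, epsilon=0.1).fit(X_pairs, bonded)

# Scores for the candidate pairs of one protein; a maximum-weight matching
# over these scores would yield the predicted connectivity pattern.
scores = svr.predict(X_pairs[:6])
print(np.round(scores, 2))
```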
Diagnostic accuracy of transabdominal ultrasonography for gallbladder polyps: systematic review.
Martin, Erin; Gill, Richdeep; Debru, Estifanos
2018-06-01
Previous research has shown variable but generally poor accuracy of transabdominal ultrasonography in the diagnosis of gallbladder polyps. We performed a systematic review of the literature with the aim of helping surgeons interpret and apply these findings in the preoperative assessment and counselling of their patients. We searched PubMed, MEDLINE and the Cochrane database using the keywords "gallbladder," "polyp," "ultrasound," "pathology" and "diagnosis" for English-language articles published after 1990 with the full-text article available through our institutional subscriptions. Polyps were defined as immobile features that on transabdominal ultrasonography appear to arise from the mucosa and that lack an acoustic shadow, and pseudopolyps were defined as features such as inflammation, hyperplasia, cholesterolosis and adenomyomatosis that convey no risk of malignant transformation. The search returned 1816 articles, which were narrowed down to 14 primary sources involving 15 497 (range 23-13 703) patients who had preoperative transabdominal ultrasonography, underwent cholecystectomy and had postoperative pathology results available. Among the 1259 patients in whom a gallbladder polyp was diagnosed on ultrasonography, 188 polyps were confirmed as true polyps on pathologic examination, and 81 of these were found to be malignant. Of the 14 238 patients for whom a polyp was not seen on ultrasonography, 38 had a true polyp on pathologic examination, none of which were malignant. For true gallbladder polyps, transabdominal ultrasonography had a sensitivity of 83.1%, specificity of 96.3%, positive predictive value of 14.9% (7.0% for malignant polyps) and negative predictive value of 99.7%. Transabdominal ultrasonography has a high false-positive rate (85.1%) for the diagnosis of gallbladder polyps. Further study of alternative imaging modalities and reevaluation of existing management guidelines are warranted.
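The pooled accuracy figures follow from a 2x2 table; the counts below are reconstructed from the review (1259 positive scans with 188 true polyps; 14 238 negative scans with 38 true polyps) and closely reproduce the reported sensitivity, PPV and NPV, while the published specificity (96.3%) was evidently pooled differently and is not reproduced by these aggregate counts.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn)}

m = diagnostic_metrics(tp=188, fp=1259 - 188, fn=38, tn=14238 - 38)
print({k: f"{100 * v:.1f}%" for k, v in m.items()})
# Sensitivity (83.2%), PPV (14.9%) and NPV (99.7%) closely match the abstract;
# the aggregate specificity here (93.0%) differs from the reported 96.3%.
```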
Rauscher, S; Flamm, C; Mandl, C W; Heinz, F X; Stadler, P F
1997-07-01
The prediction of the complete matrix of base pairing probabilities was applied to the 3' noncoding region (NCR) of flavivirus genomes. This approach identifies not only well-defined secondary structure elements, but also regions of high structural flexibility. Flaviviruses, many of which are important human pathogens, have a common genomic organization, but exhibit a significant degree of RNA sequence diversity in the functionally important 3'-NCR. We demonstrate the presence of secondary structures shared by all flaviviruses, as well as structural features that are characteristic for groups of viruses within the genus reflecting the established classification scheme. The significance of most of the predicted structures is corroborated by compensatory mutations. The availability of infectious clones for several flaviviruses will allow the assessment of these structural elements in processes of the viral life cycle, such as replication and assembly.
The Degradome database: mammalian proteases and diseases of proteolysis.
Quesada, Víctor; Ordóñez, Gonzalo R; Sánchez, Luis M; Puente, Xose S; López-Otín, Carlos
2009-01-01
The degradome is defined as the complete set of proteases present in an organism. The recent availability of whole genomic sequences from multiple organisms has led us to predict the contents of the degradomes of several mammalian species. To ensure the fidelity of these predictions, our methods have included manual curation of individual sequences and, when necessary, direct cloning and sequencing experiments. The results of these studies in human, chimpanzee, mouse and rat have been incorporated into the Degradome database, which can be accessed through a web interface at http://degradome.uniovi.es. The annotations about each individual protease can be retrieved by browsing catalytic classes and families or by searching specific terms. This web site also provides detailed information about genetic diseases of proteolysis, a growing field of great importance for multiple users. Finally, the user can find additional information about protease structures, protease inhibitors, ancillary domains of proteases and differences between mammalian degradomes.
Rajoli, Rajith KR; Back, David J; Rannard, Steve; Meyers, Caren Freel; Flexner, Charles; Owen, Andrew; Siccardi, Marco
2014-01-01
Background and Objectives Antiretrovirals (ARVs) are currently used for the treatment and prevention of HIV infection. Poor adherence and low tolerability of some existing oral formulations can hinder their efficacy. Long-acting (LA) injectable nanoformulations could help address these complications by simplifying ARV administration. The aim of this study is to inform the optimisation of intramuscular LA formulations for eight ARVs through physiologically-based pharmacokinetic (PBPK) modelling. Methods A whole-body PBPK model was constructed using mathematical descriptions of molecular, physiological and anatomical processes defining pharmacokinetics. These models were validated against available clinical data and subsequently used to predict the pharmacokinetics of injectable LA formulations. Results The predictions suggest that monthly intramuscular injections are possible for dolutegravir, efavirenz, emtricitabine, raltegravir, rilpivirine and tenofovir provided that technological challenges to control the release rate can be addressed. Conclusions These data may help inform the target product profiles for LA ARV reformulation strategies. PMID:25523214
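The depot behaviour that makes monthly dosing plausible can be caricatured with a one-compartment model in which slow first-order release from the injection site rate-limits absorption ("flip-flop" kinetics); all parameter values below are assumptions for illustration, far simpler than the whole-body PBPK model described.

```python
import numpy as np
from scipy.integrate import solve_ivp

k_rel, k_el, Vd = 0.01, 0.35, 500.0   # 1/h release, 1/h elimination, L (assumed)
dose_mg = 400.0

def depot_model(t, y):
    depot, central = y
    return [-k_rel * depot,                      # slow release from the IM depot
            k_rel * depot - k_el * central]      # systemic absorption/elimination

t = np.linspace(0, 24 * 30, 2000)                # one month, in hours
sol = solve_ivp(depot_model, (0, t[-1]), [dose_mg, 0.0], t_eval=t)
conc = sol.y[1] / Vd * 1000                      # mg/L -> ng/mL

print(f"trough at day 30: {conc[-1]:.2f} ng/mL")  # compare with a target trough
```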
Hedging Your Bets by Learning Reward Correlations in the Human Brain
Wunderlich, Klaus; Symmonds, Mkael; Bossaerts, Peter; Dolan, Raymond J.
2011-01-01
Summary Human subjects are proficient at tracking the mean and variance of rewards and updating these via prediction errors. Here, we addressed whether humans can also learn about higher-order relationships between distinct environmental outcomes, a defining ecological feature of contexts where multiple sources of rewards are available. By manipulating the degree to which distinct outcomes are correlated, we show that subjects implemented an explicit model-based strategy to learn the associated outcome correlations and were adept in using that information to dynamically adjust their choices in a task that required a minimization of outcome variance. Importantly, the experimentally generated outcome correlations were explicitly represented neuronally in right midinsula with a learning prediction error signal expressed in rostral anterior cingulate cortex. Thus, our data show that the human brain represents higher-order correlation structures between rewards, a core adaptive ability whose immediate benefit is optimized sampling. PMID:21943609
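The variance-minimization objective in the task has a closed-form solution familiar from portfolio theory; for two reward sources with standard deviations s1, s2 and correlation rho, the allocation to the first source that minimizes outcome variance is given below (a standard result, not the authors' analysis).

```python
def min_variance_weight(s1, s2, rho):
    """Allocation to source 1 minimizing the variance of a two-outcome mix.

    Var(w) = w^2 s1^2 + (1-w)^2 s2^2 + 2 w (1-w) rho s1 s2, minimized at
    w* = (s2^2 - rho s1 s2) / (s1^2 + s2^2 - 2 rho s1 s2).
    """
    return (s2**2 - rho * s1 * s2) / (s1**2 + s2**2 - 2 * rho * s1 * s2)

# Anticorrelated sources allow substantial hedging; positive correlation does not.
print(min_variance_weight(1.0, 1.0, -0.8))  # 0.5; mix variance shrinks to 0.1
print(min_variance_weight(1.0, 2.0, 0.3))   # tilt toward the less volatile source
```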
Hierarchy of stability factors in reverse shoulder arthroplasty.
Gutiérrez, Sergio; Keller, Tony S; Levy, Jonathan C; Lee, William E; Luo, Zong-Ping
2008-03-01
Reverse shoulder arthroplasty is being used more frequently to treat irreparable rotator cuff tears in the presence of glenohumeral arthritis and instability. To date, however, design features and functions of reverse shoulder arthroplasty, which may be associated with subluxation and dislocation of these implants, have been poorly understood. We asked: (1) what is the hierarchy of importance of joint compressive force, prosthetic socket depth, and glenosphere size in relation to stability, and (2) is this hierarchy defined by underlying and theoretically predictable joint contact characteristics? We examined the intrinsic stability in terms of the force required to dislocate the humerosocket from the glenosphere of eight commercially available reverse shoulder arthroplasty devices. The hierarchy of factors was led by compressive force followed by socket depth; glenosphere size played a much lesser role in stability of the reverse shoulder arthroplasty device. Similar results were predicted by a mathematical model, suggesting the stability was determined primarily by compressive forces generated by muscles.
Subjective but not actigraphy-defined sleep predicts next-day fatigue in chronic fatigue syndrome: a prospective daily diary study
Russell, Charlotte; Wearden, Alison J.; Fairclough, Gillian; Emsley, Richard A.; Kyle, Simon D.
2016-01-01
Study Objectives: This study aimed to (1) examine the relationship between subjective and actigraphy-defined sleep, and next-day fatigue in chronic fatigue syndrome (CFS); and (2) investigate the potential mediating role of negative mood on this relationship. We also sought to examine the effect of presleep arousal on perceptions of sleep. Methods: Twenty-seven adults meeting the Oxford criteria for CFS and self-identifying as experiencing sleep difficulties were recruited to take part in a prospective daily diary study, enabling symptom capture in real time over a 6-day period. A paper diary was used to record nightly subjective sleep and presleep arousal. Mood and fatigue symptoms were rated four times each day. Actigraphy was employed to provide objective estimations of sleep duration and continuity. Results: Multilevel modelling revealed that subjective sleep variables, namely sleep quality, efficiency, and perceiving sleep to be unrefreshing, predicted following-day fatigue levels, with poorer subjective sleep related to increased fatigue. Lower subjective sleep efficiency and perceiving sleep as unrefreshing predicted reduced variance in fatigue across the following day. Negative mood on waking partially mediated these relationships. Increased presleep cognitive and somatic arousal predicted self-reported poor sleep. Actigraphy-defined sleep, however, was not found to predict following-day fatigue. Conclusions: For the first time we show that nightly subjective sleep predicts next-day fatigue in CFS and identify important factors driving this relationship. Our data suggest that sleep specific interventions, targeting presleep arousal, perceptions of sleep and negative mood on waking, may improve fatigue in CFS. Citation: Russell C, Wearden AJ, Fairclough G, Emsley RA, Kyle SD. Subjective but not actigraphy-defined sleep predicts next-day fatigue in chronic fatigue syndrome: a prospective daily diary study. SLEEP 2016;39(4):937–944. PMID:26715232
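A minimal sketch of the multilevel model described here, using synthetic data and the statsmodels mixed-model API; the variable names and effect sizes are illustrative, not the study's.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj, n_days = 27, 6                       # mirrors 27 adults over 6 days
pid = np.repeat(np.arange(n_subj), n_days)
sleep_quality = rng.normal(5, 1.5, n_subj * n_days)      # nightly 0-10 rating
subj_intercept = rng.normal(0, 1, n_subj)[pid]
fatigue = 8 - 0.6 * sleep_quality + subj_intercept + rng.normal(0, 1, n_subj * n_days)

df = pd.DataFrame({"pid": pid, "sleep_quality": sleep_quality, "fatigue": fatigue})
# Random intercept per participant; fixed effect of last night's subjective sleep
model = smf.mixedlm("fatigue ~ sleep_quality", df, groups=df["pid"]).fit()
print(model.summary())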
Abadi, Shiran; Yan, Winston X; Amar, David; Mayrose, Itay
2017-10-01
The adaptation of the CRISPR-Cas9 system as a genome editing technique has generated much excitement in recent years owing to its ability to manipulate targeted genes and genomic regions that are complementary to a programmed single guide RNA (sgRNA). However, the efficacy of a specific sgRNA is not uniquely defined by exact sequence homology to the target site, thus unintended off-targets might additionally be cleaved. Current methods for sgRNA design are mainly concerned with predicting off-targets for a given sgRNA using basic sequence features and employ elementary rules for ranking possible sgRNAs. Here, we introduce CRISTA (CRISPR Target Assessment), a novel algorithm within the machine learning framework that determines the propensity of a genomic site to be cleaved by a given sgRNA. We show that the predictions made with CRISTA are more accurate than other available methodologies. We further demonstrate that the occurrence of bulges is not a rare phenomenon and should be accounted for in the prediction process. Beyond predicting cleavage efficiencies, the learning process provides inferences regarding patterns that underlie the mechanism of action of the CRISPR-Cas9 system. We discover that attributes that describe the spatial structure and rigidity of the entire genomic site as well as those surrounding the PAM region are a major component of the prediction capabilities.
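As a rough sketch of the machine-learning framing (not CRISTA's actual feature set, training data, or algorithm), a regression forest can be trained on one-hot encoded target-site sequences to output a cleavage propensity score.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
BASES = "ACGT"

def one_hot(seq):
    """Flatten a DNA sequence into a binary position-by-base feature vector."""
    return np.array([[b == c for c in BASES] for b in seq], float).ravel()

# Toy training set: random 23-nt protospacer+PAM sites with placeholder scores
seqs = ["".join(rng.choice(list(BASES), 23)) for _ in range(300)]
X = np.array([one_hot(s) for s in seqs])
y = rng.random(300)                        # placeholder cleavage efficiencies

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print(model.predict(X[:3]))                # propensity scores for the first sites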
Computational prediction of type III and IV secreted effectors in Gram-negative bacteria
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDermott, Jason E.; Corrigan, Abigail L.; Peterson, Elena S.
2011-01-01
In this review, we provide an overview of the methods employed by four recent papers that described novel methods for computational prediction of secreted effectors from type III and IV secretion systems in Gram-negative bacteria. We summarize the results of these studies in terms of performance at accurately predicting secreted effectors, and the similarities found between secretion signals that may reflect biologically relevant features for recognition. We discuss the web-based tools for secreted effector prediction described in these studies and announce the availability of our tool, the SIEVEserver (http://www.biopilot.org). Finally, we assess the accuracy of the three type III effector prediction methods on a small set of proteins not known prior to the development of these tools that we have recently discovered and validated using both experimental and computational approaches. Our comparison shows that all methods use similar approaches and, in general, arrive at similar conclusions. We discuss the possibility of an order-dependent motif in the secretion signal, which was a point of disagreement in the studies. Our results show that there may be classes of effectors in which the signal has a loosely defined motif, and others in which secretion is dependent only on compositional biases. Computational prediction of secreted effectors from protein sequences represents an important step toward better understanding the interaction between pathogens and hosts.
Notas, George; Bariotakis, Michail; Kalogrias, Vaios; Andrianaki, Maria; Azariadis, Kalliopi; Kampouri, Errika; Theodoropoulou, Katerina; Lavrentaki, Katerina; Kastrinakis, Stelios; Kampa, Marilena; Agouridakis, Panagiotis; Pirintsos, Stergios; Castanas, Elias
2015-01-01
Severe allergic reactions of unknown etiology, necessitating a hospital visit, have an important impact on the lives of affected individuals and impose a major economic burden on societies. The prediction of clinically severe allergic reactions would be of great importance, but current attempts have been limited by the lack of a well-founded applicable methodology and the wide spatiotemporal distribution of allergic reactions. The valid prediction of severe allergies (and especially those needing hospital treatment) in a region could alert health authorities and implicated individuals to take appropriate preemptive measures. In the present report we have collected visits for serious allergic reactions of unknown etiology from two major hospitals on the island of Crete, for two distinct time periods (validation and test sets). We have used the Normalized Difference Vegetation Index (NDVI), a satellite-based, freely available measurement, which is an indicator of live green vegetation at a given geographic area, and a set of meteorological data to develop a model capable of describing and predicting severe allergic reaction frequency. Our analysis has retained NDVI and temperature as accurate identifiers and predictors of increased hospital severe allergic reaction visits. Our approach may contribute towards the development of satellite-based modules for the prediction of severe allergic reactions in specific, well-defined geographical areas. It could also probably be used for the prediction of other environment-related diseases and conditions. PMID:25794106
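A minimal sketch of the kind of model described, assuming a Poisson regression of weekly hospital visit counts on NDVI and temperature; the data and coefficients are synthetic, not the Cretan hospital series.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n_weeks = 104
ndvi = rng.uniform(0.2, 0.8, n_weeks)            # weekly NDVI for the area
temp = rng.normal(18, 6, n_weeks)                # weekly mean temperature (deg C)
lam = np.exp(0.5 + 2.0 * ndvi + 0.03 * temp)     # synthetic expected visit counts
visits = rng.poisson(lam)

X = sm.add_constant(np.column_stack([ndvi, temp]))
fit = sm.GLM(visits, X, family=sm.families.Poisson()).fit()
print(fit.params)          # recovers the NDVI and temperature effects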
Magnitude Estimation for Large Earthquakes from Borehole Recordings
NASA Astrophysics Data System (ADS)
Eshaghi, A.; Tiampo, K. F.; Ghofrani, H.; Atkinson, G.
2012-12-01
We present a simple and fast magnitude determination technique for earthquake and tsunami early warning systems based on strong ground motion prediction equations (GMPEs) in Japan. This method incorporates borehole strong motion records provided by the Kiban Kyoshin network (KiK-net) stations. We analyzed strong ground motion data from large magnitude earthquakes (5.0 ≤ M ≤ 8.1) with focal depths < 50 km and epicentral distances of up to 400 km from 1996 to 2010. Using both peak ground acceleration (PGA) and peak ground velocity (PGV) we derived GMPEs for Japan. These GMPEs are used as the basis for regional magnitude determination. Predicted magnitudes from PGA values (Mpga) and from PGV values (Mpgv) were defined. Mpga and Mpgv correlate strongly with the moment magnitude of the event, provided sufficient records for each event are available. The results show that Mpgv has a smaller standard deviation than Mpga when compared with the estimated magnitudes and provides a more accurate early assessment of earthquake magnitude. We tested this new method by estimating the magnitude of the 2011 Tohoku earthquake: PGA and PGV from borehole recordings allow us to estimate the magnitude of this event 156 s and 105 s after the earthquake onset, respectively. We demonstrate that incorporating borehole strong ground-motion records immediately available after the occurrence of large earthquakes significantly increases the accuracy of earthquake magnitude estimation and thereby improves the performance of earthquake and tsunami early warning systems.
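The magnitude determination step can be illustrated by inverting a GMPE of the usual functional form for magnitude; the form and coefficients below are placeholders, not the equations derived from the KiK-net data.

import numpy as np

# Hypothetical GMPE of the form log10(PGV) = a + b*M + c*log10(R)
a, b, c = -2.5, 0.7, -1.3

def magnitude_from_pgv(pgv_cm_s, hypo_dist_km):
    """Invert the GMPE for magnitude given an observed borehole PGV."""
    return (np.log10(pgv_cm_s) - a - c * np.log10(hypo_dist_km)) / b

# Average station estimates to obtain the event magnitude (Mpgv)
obs = [(12.0, 80.0), (4.5, 150.0), (20.0, 60.0)]     # (PGV, distance) pairs
m_pgv = np.mean([magnitude_from_pgv(p, r) for p, r in obs])
print(f"Mpgv = {m_pgv:.2f}")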
QSAR Modeling Using Large-Scale Databases: Case Study for HIV-1 Reverse Transcriptase Inhibitors.
Tarasova, Olga A; Urusova, Aleksandra F; Filimonov, Dmitry A; Nicklaus, Marc C; Zakharov, Alexey V; Poroikov, Vladimir V
2015-07-27
Large-scale databases are important sources of training sets for various QSAR modeling approaches. Generally, these databases contain information extracted from different sources. This variety of sources can produce inconsistency in the data, defined as sometimes widely diverging activity results for the same compound against the same target. Because such inconsistency can reduce the accuracy of predictive models built from these data, we address the question of how best to use data from publicly and commercially accessible databases to create accurate and predictive QSAR models. We investigate the suitability of commercially and publicly available databases for QSAR modeling of antiviral activity (HIV-1 reverse transcriptase (RT) inhibition). We present several methods for the creation of modeling (i.e., training and test) sets from two databases, one commercial and one freely available: Thomson Reuters Integrity and ChEMBL. We found that the typical predictivities of QSAR models obtained using these different modeling set compilation methods differ significantly from each other. The best results were obtained using training sets compiled for compounds tested using only one method and material (i.e., a specific type of biological assay). Compound sets aggregated by target only typically yielded poorly predictive models. We discuss the possibility of "mix-and-matching" assay data across aggregating databases such as ChEMBL and Integrity, and their current severe limitations for this purpose. One of these limitations is the general lack of complete, semantically computer-parsable descriptions of assay methodology in these databases that would allow one to determine the mix-and-matchability of result sets at the assay level.
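A minimal sketch of the best-performing compilation strategy, assuming a ChEMBL-like table of activity records (the column names and values are illustrative): keep only compounds measured with a single assay before aggregating into a training set.

import pandas as pd

# Toy records in a ChEMBL-like layout; column names are illustrative only
records = pd.DataFrame({
    "compound_id": ["c1", "c1", "c2", "c3", "c3", "c4"],
    "assay_id":    ["a1", "a2", "a1", "a1", "a3", "a1"],
    "assay_type":  ["enzymatic", "cell-based", "enzymatic",
                    "enzymatic", "cell-based", "enzymatic"],
    "pIC50":       [6.1, 4.9, 7.3, 5.5, 6.8, 8.0],
})

# Strategy favored by the study: restrict to one method and material
# (a single assay) rather than aggregating by target alone
one_assay = records[records["assay_id"] == "a1"]
training_set = one_assay.groupby("compound_id", as_index=False)["pIC50"].mean()
print(training_set)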
Genome-scale prediction of proteins with long intrinsically disordered regions.
Peng, Zhenling; Mizianty, Marcin J; Kurgan, Lukasz
2014-01-01
Proteins with long disordered regions (LDRs), defined as having 30 or more consecutive disordered residues, are abundant in eukaryotes, and these regions are recognized as a distinct class of biologically functional domains. LDRs facilitate various cellular functions and are important for target selection in structural genomics. Motivated by the lack of methods that directly predict proteins with LDRs, we designed the Super-fast predictor of proteins with Long Intrinsically DisordERed regions (SLIDER). SLIDER utilizes logistic regression that takes an empirically chosen set of numerical features, which consider selected physicochemical properties of amino acids, sequence complexity, and amino acid composition, as its inputs. Empirical tests show that SLIDER offers competitive predictive performance combined with low computational cost. It outperforms, by at least a modest margin, a comprehensive set of modern disorder predictors (which can indirectly predict LDRs) and is 16 times faster than the best currently available disorder predictor. Utilizing our time-efficient predictor, we characterized the abundance and functional roles of proteins with LDRs over 110 eukaryotic proteomes. Similar to related studies, we found that eukaryotes have many (on average 30.3%) proteins with LDRs, with the majority of proteomes having between 25% and 40%; higher abundance is characteristic of proteomes that have larger proteins. Our first-of-its-kind large-scale functional analysis shows that these proteins are enriched in a number of cellular functions and processes including certain binding events, regulation of catalytic activities, cellular component organization, biogenesis, biological regulation, and some metabolic and developmental processes. A webserver that implements SLIDER is available at http://biomine.ece.ualberta.ca/SLIDER/. Copyright © 2013 Wiley Periodicals, Inc.
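A toy sketch of a SLIDER-style predictor: logistic regression over cheap sequence-derived features. The feature set (composition of a hypothetical set of disorder-promoting residues, a crude complexity proxy, and length) and the labels are illustrative, not the empirically chosen features or annotations used by SLIDER.

import numpy as np
from sklearn.linear_model import LogisticRegression

DISORDER_PRONE = set("AEGKPQRS")       # illustrative disorder-promoting residues

def features(seq):
    comp = sum(aa in DISORDER_PRONE for aa in seq) / len(seq)
    unique_frac = len(set(seq)) / 20.0            # crude sequence-complexity proxy
    return [comp, unique_frac, len(seq) / 1000.0]

rng = np.random.default_rng(4)
AAS = list("ACDEFGHIKLMNPQRSTVWY")
seqs = ["".join(rng.choice(AAS, rng.integers(100, 600))) for _ in range(200)]
y = rng.integers(0, 2, len(seqs))      # placeholder LDR (>=30 residues) labels

X = np.array([features(s) for s in seqs])
clf = LogisticRegression().fit(X, y)
print(clf.predict_proba(X[:3])[:, 1])  # P(protein contains an LDR)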
Pascoal, Lívia Maia; Lopes, Marcos Venícios de Oliveira; Chaves, Daniel Bruno Resende; Beltrão, Beatriz Amorim; da Silva, Viviane Martins; Monteiro, Flávia Paula Magalhães
2015-01-01
OBJECTIVE: to analyze the accuracy of the defining characteristics of the Impaired gas exchange nursing diagnosis in children with acute respiratory infection. METHOD: open prospective cohort study conducted with 136 children monitored for a consecutive period of at least six days and not more than ten days. An instrument based on the defining characteristics of the Impaired gas exchange diagnosis and on literature addressing pulmonary assessment was used to collect data. The accuracy means of all the defining characteristics under study were computed. RESULTS: the Impaired gas exchange diagnosis was present in 42.6% of the children in the first assessment. Hypoxemia was the characteristic that presented the best measures of accuracy. Abnormal breathing presented high sensitivity, while restlessness, cyanosis, and abnormal skin color showed high specificity. All the characteristics presented negative predictive values of 70% and cyanosis stood out by its high positive predictive value. CONCLUSION: hypoxemia was the defining characteristic that presented the best predictive ability to determine Impaired gas exchange. Studies of this nature enable nurses to minimize variability in clinical situations presented by the patient and to identify more precisely the nursing diagnosis that represents the patient's true clinical condition. PMID:26155010
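The accuracy measures used in such diagnostic studies follow directly from a 2x2 table of the defining characteristic against the diagnosis; the counts below are hypothetical, not the study's data.

def accuracy_measures(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 table of a defining
    characteristic (e.g., hypoxemia) against the Impaired gas exchange diagnosis."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }

# Hypothetical counts for one defining characteristic in 136 children
print(accuracy_measures(tp=50, fp=10, fn=8, tn=68))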
Shoreline development and degradation of coastal fish reproduction habitats.
Sundblad, Göran; Bergström, Ulf
2014-12-01
Coastal development has severely affected habitats and biodiversity during the last century, but quantitative estimates of the impacts are usually lacking. We utilize predictive habitat modeling and mapping of human pressures to estimate the cumulative long-term effects of coastal development on fish habitats. Based on aerial photographs taken since the 1960s, shoreline development rates were estimated in the Stockholm archipelago in the Baltic Sea. By combining shoreline development rates with spatial predictions of fish reproduction habitats, we estimated annual habitat degradation rates for three of the most common coastal fish species: northern pike (Esox lucius), Eurasian perch (Perca fluviatilis) and roach (Rutilus rutilus). The results showed that shoreline constructions were concentrated in the reproduction habitats of these species. The estimated degradation rates, where a degraded habitat was defined as having ≥3 constructions per 100 m shoreline, were on average 0.5% of available habitats per year, and about 1% in areas close to larger population centers. Approximately 40% of available habitats were already degraded in 2005. These results provide an example of how many small construction projects over time may have a vast impact on coastal fish populations.
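Once habitats are mapped, the degradation estimate reduces to simple bookkeeping against the ≥3 constructions per 100 m criterion; the counts below are hypothetical.

def degraded_fraction(constructions_per_habitat, threshold=3):
    """Fraction of mapped habitats meeting the >=3 constructions / 100 m criterion."""
    flags = [c >= threshold for c in constructions_per_habitat]
    return sum(flags) / len(flags)

# Hypothetical construction counts (per 100 m of shoreline) for mapped habitats
counts_1960s = [0, 1, 0, 2, 0, 1, 0, 0]
counts_2005  = [1, 3, 0, 4, 2, 3, 5, 1]
print(degraded_fraction(counts_1960s), degraded_fraction(counts_2005))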
RSRE: RNA structural robustness evaluator
Shu, Wenjie; Zheng, Zhiqiang; Wang, Shengqi
2007-01-01
Biological robustness, defined as the ability to maintain stable functioning in the face of various perturbations, is an important and fundamental topic in current biology, and has become a focus of numerous studies in recent years. Although structural robustness has been explored in several types of RNA molecules, the origins of robustness are still controversial. Computational analysis results are needed to make up for the lack of evidence of robustness in natural biological systems. The RNA structural robustness evaluator (RSRE) web server presented here provides a freely available online tool to quantitatively evaluate the structural robustness of RNA based on the widely accepted definition of neutrality. Several classical structure comparison methods are employed; five randomization methods are implemented to generate control sequences; sub-optimal predicted structures can be optionally utilized to mitigate the uncertainty of secondary structure prediction. With a user-friendly interface, the web application is easy to use. Intuitive illustrations are provided along with the original computational results to facilitate analysis. The RSRE will be helpful in the wide exploration of RNA structural robustness and will catalyze our understanding of RNA evolution. The RSRE web server is freely available at http://biosrv1.bmi.ac.cn/RSRE/ or http://biotech.bmi.ac.cn/RSRE/. PMID:17567615
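A sketch of the neutrality calculation, assuming the ViennaRNA Python bindings (RNA.fold and RNA.bp_distance) are installed and using a base-pair-distance similarity; the RSRE server additionally offers several structure comparison methods, randomization controls, and sub-optimal structures, none of which are reproduced here.

import RNA  # ViennaRNA Python bindings (assumed available)

def neutrality(seq):
    """Mean structural similarity between a sequence and its one-mutant neighbors."""
    wt_struct, _ = RNA.fold(seq)
    sims = []
    for i, base in enumerate(seq):
        for alt in "ACGU":
            if alt == base:
                continue
            mut_struct, _ = RNA.fold(seq[:i] + alt + seq[i + 1:])
            dist = RNA.bp_distance(wt_struct, mut_struct)
            sims.append(1.0 - dist / len(seq))     # normalized similarity
    return sum(sims) / len(sims)

print(neutrality("GGGAAAUCC"))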
Computational crystallization.
Altan, Irem; Charbonneau, Patrick; Snell, Edward H
2016-07-15
Crystallization is a key step in macromolecular structure determination by crystallography. While a robust theoretical treatment of the process is available, due to the complexity of the system, the experimental process is still largely one of trial and error. In this article, efforts in the field are discussed together with a theoretical underpinning using a solubility phase diagram. Prior knowledge has been used to develop tools that computationally predict the crystallization outcome and define mutational approaches that enhance the likelihood of crystallization. For the most part these tools are based on binary outcomes (crystal or no crystal), and the full information contained in an assembly of crystallization screening experiments is lost. The potential of this additional information is illustrated by examples where new biological knowledge can be obtained and where a target can be sub-categorized to predict which class of reagents provides the crystallization driving force. Computational analysis of crystallization requires complete and correctly formatted data. While massive crystallization screening efforts are under way, the data available from many of these studies are sparse. The potential for this data and the steps needed to realize this potential are discussed. Copyright © 2016 Elsevier Inc. All rights reserved.
The HSA in Your Future: Defined Contribution Retiree Medical Coverage.
Towarnicky, Jack M
In 2004, when evaluating health savings account (HSA) business opportunities, I predicted: "Twenty-five years ago, no one had ever heard of 401(k); 25 years from now, everyone will have an HSA." Twelve years later, growth in HSA eligibility, participation, contributions and asset accumulations suggests we just might achieve that prediction. This article shares one plan sponsor's journey to help employees accumulate assets to fund medical costs, both while employed and after retirement. It documents a 30-plus-year retiree health insurance transition from a defined benefit to a defined dollar structure, culminating in a full-replacement defined contribution structure using HSA-qualifying high-deductible health plans (HDHPs) and then redeploying/repurposing the HSA to incorporate a savings incentive for retiree medical costs.
[Epidemiology of shoulder dystocia].
Deneux-Tharaux, C; Delorme, P
2015-12-01
To synthesize the available evidence regarding the incidence and risk factors of shoulder dystocia (SD), we consulted the Medline database and national guidelines. Shoulder dystocia is defined as a vaginal delivery that requires additional obstetric manoeuvres to deliver the foetus after the head has delivered and gentle traction has failed. With this definition, the incidence of SD in population-based studies is about 0.5-1% of vaginal deliveries. Many risk factors have been described, but most associations are not independent or have not been found consistently. The two characteristics consistently found to be independent risk factors for SD in the literature are previous SD (incidence of SD of about 10% in parturients with previous SD) and foetal macrosomia. Maternal diabetes and obesity are also associated with a higher risk of SD (2- to 4-fold), but these associations may be completely explained by foetal macrosomia. However, even factors independently and consistently associated with SD do not allow a valid prediction of SD because they are not discriminant: 50 to 70% of SD cases occur in their absence, and the great majority of deliveries in which they are present are not complicated by SD. In summary, shoulder dystocia is defined by the need for additional obstetric manoeuvres to deliver the foetus after the head has delivered and gentle traction has failed, and it complicates 0.5-1% of vaginal deliveries. Its main risk factors are previous SD and macrosomia, but they are poorly predictive. SD remains an unpredictable obstetric emergency. Knowledge of SD risk factors should increase the vigilance of clinicians in at-risk contexts. Copyright © 2015. Published by Elsevier Masson SAS.
Imposing constraints on parameter values of a conceptual hydrological model using baseflow response
NASA Astrophysics Data System (ADS)
Dunn, S. M.
Calibration of conceptual hydrological models is frequently limited by a lack of data about the area being studied. The result is that a broad range of parameter values can be identified that give an equally good calibration to the available observations, usually of stream flow. The use of total stream flow can bias analyses towards interpretation of rapid runoff, whereas water quality issues are more frequently associated with low-flow conditions. This paper demonstrates how model distinctions between surface and sub-surface runoff can be used to define a likelihood measure based on the sub-surface (or baseflow) response. This helps to provide more information about the model behaviour, constrain the acceptable parameter sets and reduce uncertainty in streamflow prediction. A conceptual model, DIY, is applied to two contrasting catchments in Scotland, the Ythan and the Carron Valley. Parameter ranges and envelopes of prediction are identified using criteria based on total flow efficiency, baseflow efficiency and combined efficiencies. The individual parameter ranges derived using the combined efficiency measures still cover relatively wide bands, but are better constrained for the Carron than for the Ythan. This reflects the fact that hydrological behaviour in the Carron is dominated by a much flashier surface response than in the Ythan. Hence, the total flow efficiency is more strongly controlled by surface runoff in the Carron and there is a greater contrast with the baseflow efficiency. Comparisons of the predictions using different efficiency measures for the Ythan also suggest that there is a danger of confusing parameter uncertainties with data and model error if inadequate likelihood measures are defined.
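A sketch of how a baseflow-based likelihood measure can be combined with a total-flow measure, assuming the Nash-Sutcliffe efficiency (NSE) as the efficiency criterion; the flow series and acceptance thresholds are hypothetical.

import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency; 1 is a perfect fit."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Hypothetical daily flows (mm/day): total streamflow and a baseflow separation
obs_total, sim_total = np.array([3.1, 5.0, 2.2, 1.8]), np.array([2.8, 4.6, 2.5, 1.9])
obs_base, sim_base = np.array([1.0, 1.1, 1.2, 1.1]), np.array([0.9, 1.2, 1.1, 1.2])

# Combined criterion: accept a parameter set only if both efficiencies pass
combined_ok = nse(obs_total, sim_total) > 0.7 and nse(obs_base, sim_base) > 0.7
print(combined_ok)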
Chen, Sung-Wei; Gau, Susan Shur-Fen; Pikhart, Hynek; Peasey, Anne; Chen, Shih-Tse; Tsai, Ming-Chen
2014-08-01
Work stress, as defined by the Demand-Control-Support (DCS) model and the Effort-Reward Imbalance (ERI) model, has been found to predict risks for depression, anxiety, and substance addictions, but little research is available on work stress and Internet addiction. The aims of this study are to assess whether the DCS and ERI models predict subsequent risks of Internet addiction, and to examine whether these associations might be mediated by depression and anxiety. A longitudinal study was conducted in a sample (N=2,550) of 21-55 year old information technology engineers without Internet addiction. Data collection included questionnaires covering work stress, demographic factors, psychosocial factors, substance addictions, Internet-related factors, depression and anxiety at wave 1, and the Internet Addiction Test (IAT) at wave 2. Ordinal logistic regression was used to assess the associations between work stress and IAT; path analysis was adopted to evaluate potentially mediating roles of depression and anxiety. After 6.2 months of follow-up, 14.0% of subjects became problematic Internet users (IAT 40-69) and 4.1% pathological Internet users (IAT 70-100). Job strain was associated with an increased risk of Internet addiction (odds ratio [OR] of having a higher IAT outcome vs. a lower outcome was 1.53); high work social support reduced the risk of Internet addiction (OR=0.62). High ER ratio (OR=1.61) and high overcommitment (OR=1.68) were associated with increased risks of Internet addiction. Work stress defined by the DCS and ERI models predicted subsequent risks of Internet addiction.
Peng, Mingkai; Li, Bing; Southern, Danielle A; Eastwood, Cathy A; Quan, Hude
2017-01-01
Hospital administrative health data create separate records for each hospital stay of a patient. Treating a hospital transfer as a readmission could lead to biased results in health services research. This is a cross-sectional study. We used the hospital discharge abstract database in 2013 from Alberta, Canada. Transfer cases were defined by the transfer institution code and were used as the reference standard. Four time gaps between 2 hospitalizations (6, 9, 12, and 24 h) and 2 day gaps between hospitalizations [same day (up to 24 h), ≤1 d (up to 48 h)] were used to identify transfer cases. We compared the sensitivity and positive predictive value (PPV) of the 6 definitions across different categories of sex, age, and location of residence. Readmission rates within 30 days were compared after episodes of care were defined at the different time gaps. Among the 6 definitions, sensitivity ranged from 93.3% to 98.7% and PPV ranged from 86.4% to 96%. The time gap of 9 hours had the optimal balance of sensitivity and PPV. The time gaps of same day (up to 24 h) and 9 hours had 30-day readmission rates comparable to those obtained with the transfer indicator after defining episodes of care. We recommend the use of a time gap of 9 hours between 2 hospitalizations to define hospital transfer in inpatient databases. When admission or discharge time is not available in the database, a time gap of same day (up to 24 h) can be used to define hospital transfer.
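A sketch of the recommended 9-hour rule applied to a discharge-abstract-like table; the data and column names are illustrative, not the Alberta database schema.

import pandas as pd

stays = pd.DataFrame({
    "patient": ["p1", "p1", "p2", "p2"],
    "admit":     pd.to_datetime(["2013-01-01 08:00", "2013-01-03 14:00",
                                 "2013-02-01 10:00", "2013-02-10 09:00"]),
    "discharge": pd.to_datetime(["2013-01-03 09:00", "2013-01-08 12:00",
                                 "2013-02-05 16:00", "2013-02-12 11:00"]),
}).sort_values(["patient", "admit"])

# Gap between each discharge and the same patient's next admission
gap = stays.groupby("patient")["admit"].shift(-1) - stays["discharge"]
stays["transfer_out"] = gap <= pd.Timedelta(hours=9)    # the 9-hour rule
print(stays)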
Jappe, Emma Christine; Kringelum, Jens; Trolle, Thomas; Nielsen, Morten
2018-02-15
Peptides that bind to and are presented by MHC class I and class II molecules collectively make up the immunopeptidome. In the context of vaccine development, an understanding of the immunopeptidome is essential, and much effort has been dedicated to its accurate and cost-effective identification. Current state-of-the-art methods mainly comprise in silico tools for predicting MHC binding, which is strongly correlated with peptide immunogenicity. However, only a small proportion of the peptides that bind to MHC molecules are, in fact, immunogenic, and substantial work has been dedicated to uncovering additional determinants of peptide immunogenicity. In this context, and in light of recent advancements in mass spectrometry (MS), the existence of immunological hotspots has been given new life, inciting the hypothesis that hotspots are associated with MHC class I peptide immunogenicity. We here introduce a precise terminology for defining these hotspots and carry out a systematic analysis of MS and in silico predicted hotspots. We find that hotspots defined from MS data are largely captured by peptide binding predictions, enabling their replication in silico. This leads us to conclude that hotspots, to a great degree, are simply a result of promiscuous HLA binding, which disproves the hypothesis that the identification of hotspots provides novel information in the context of immunogenic peptide prediction. Furthermore, our analyses demonstrate that the signal of ligand processing, although present in the MS data, has very low predictive power to discriminate between MS and in silico defined hotspots. © 2018 John Wiley & Sons Ltd.
Prediction of protein orientation upon immobilization on biological and nonbiological surfaces
NASA Astrophysics Data System (ADS)
Talasaz, Amirali H.; Nemat-Gorgani, Mohsen; Liu, Yang; Ståhl, Patrik; Dutton, Robert W.; Ronaghi, Mostafa; Davis, Ronald W.
2006-10-01
We report on a rapid simulation method for predicting protein orientation on a surface based on electrostatic interactions. New methods for predicting protein immobilization are needed because of the increasing use of biosensors and protein microarrays, two technologies that use protein immobilization onto a solid support, and because the orientation of an immobilized protein is important for its function. The proposed simulation model is based on the premise that the protein interacts with the electric field generated by the surface, and this interaction defines the orientation of attachment. Results of this model are in agreement with experimental observations of immobilization of mitochondrial creatine kinase and type I hexokinase on biological membranes. The advantages of our method are that it can be applied to any protein with a known structure; it does not require modeling of the surface at atomic resolution and can be run relatively quickly on readily available computing resources. Finally, we also propose an orientation of membrane-bound cytochrome c, a protein for which the membrane orientation has not been unequivocally determined.
Visentin, Andrea; Facco, Monica; Frezzato, Federica; Castelli, Monica; Trimarco, Valentina; Martini, Veronica; Gattazzo, Cristina; Severin, Filippo; Chiodin, Giorgia; Martines, Annalisa; Bonaldi, Laura; Gianesello, Ilaria; Pagnin, Elisa; Boscaro, Elisa; Piazza, Francesco; Zambello, Renato; Semenzato, Gianpietro; Trentin, Livio
2015-10-01
Several prognostic factors have been identified to predict the outcome of patients with chronic lymphocytic leukemia (CLL), but only a few studies have analyzed several markers together. Taking advantage of a population of 608 patients, we identified the strongest prognostic markers of survival and, subsequently, in a cohort of 212 patients we integrated data on cytogenetic lesions, IGHV mutational status, and CD38 expression in a new and easy scoring system we called the integrated CLL scoring system (ICSS). ICSS defines 3 groups of risk: (1) low risk (patients with 13q(-) or normal fluorescence in-situ hybridization results, mutated IGHV, and CD38(-)); (2) high risk (all 11q(-) or 17p(-) patients and/or all unmutated IGHV and CD38(+) patients); and (3) intermediate risk (all remaining patients). Using only these 3 already available prognostic factors, we were able to properly restratify patients and better predict the clinical course of the disease. ICSS could become a useful tool for the management of CLL patients. Copyright © 2015 Elsevier Inc. All rights reserved.
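The ICSS assignment itself is straightforward to encode; a sketch following the three risk groups as defined in the abstract.

def icss_risk(fish, ighv_mutated, cd38_positive):
    """Integrated CLL scoring system risk group from the FISH result ('13q-',
    'normal', '11q-', '17p-'), IGHV mutational status and CD38 expression."""
    if fish in ("11q-", "17p-") or (not ighv_mutated and cd38_positive):
        return "high"
    if fish in ("13q-", "normal") and ighv_mutated and not cd38_positive:
        return "low"
    return "intermediate"

print(icss_risk("13q-", ighv_mutated=True, cd38_positive=False))   # low
print(icss_risk("11q-", ighv_mutated=True, cd38_positive=False))   # high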
Natural selection and the evolution of reproductive effort.
Hirshfield, M F; Tinkle, D W
1975-06-01
Reproductive effort is defined as that proportion of the total energy budget of an organism that is devoted to reproductive processes. Reproductive effort at a given age within a species will be selected to maximize reproductive value at that age. Reproductive effort is not directly affected by changes in juvenile survivorship, nor necessarily reduced by an increase in adult survivorship. Selection for high levels of reproductive effort should occur when extrinsic adult mortality is high, in environments with constant juvenile survivorship, and in good years for juvenile survivorship in a variable environment, provided that the quality of the year is predictable by adults. Data necessary to measure reproductive effort and to understand how selection results in different levels of effort between individuals and species are discussed. We make several predictions about the effect of increased resource availability on reproductive effort. The empirical bases for testing these predictions are presently inadequate, and we consider data on energy budgets of organisms in nature to be essential for such tests. We also conclude that variance in life table parameters must be known in detail to understand the selective bases of levels of reproductive effort.
Functional Capacity Evaluation & Disability
Chen, Joseph J
2007-01-01
Function, Impairment, and Disability are words in which many physicians have little interest. Most physicians are trained to deal with structure and physiology and not function and disability. The purpose of this article is to address some of the common questions that many physicians have with the use of functional capacity evaluation and disability and also to provide a unifying model that can explain the medical and societal variables in predicting disability. We will first define the functional capacity evaluation (FCE) and explore the different types available as well as their uses. We will review several studies exploring the validity and reliability of the FCE on healthy and chronic pain patients. We will examine the few studies that look into whether an FCE is predictive of return to work and whether an FCE is predictive of disability. In the second half of this article, we will focus on the Assessment of Disability from the origins of the United States Social Security Administration to a bold new concept, the World Health Organization's International Classification of Function, Disability and Health. PMID:17907444
Trolle, Thomas; McMurtrey, Curtis P; Sidney, John; Bardet, Wilfried; Osborn, Sean C; Kaever, Thomas; Sette, Alessandro; Hildebrand, William H; Nielsen, Morten; Peters, Bjoern
2016-02-15
HLA class I-binding predictions are widely used to identify candidate peptide targets of human CD8(+) T cell responses. Many such approaches focus exclusively on a limited range of peptide lengths, typically 9 aa and sometimes 9-10 aa, despite multiple examples of dominant epitopes of other lengths. In this study, we examined whether epitope predictions can be improved by incorporating the natural length distribution of HLA class I ligands. We found that, although different HLA alleles have diverse length-binding preferences, the length profiles of ligands that are naturally presented by these alleles are much more homogeneous. We hypothesized that this is due to a defined length profile of peptides available for HLA binding in the endoplasmic reticulum. Based on this, we created a model of HLA allele-specific ligand length profiles and demonstrate how this model, in combination with HLA-binding predictions, greatly improves comprehensive identification of CD8(+) T cell epitopes. Copyright © 2016 by The American Association of Immunologists, Inc.
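The core idea, rescaling a binding prediction by an allele-specific ligand length profile, can be sketched as follows; the prior below is illustrative, not the profile model trained in the study.

# Hypothetical length prior for one HLA allele (fractions of natural ligands)
LENGTH_PRIOR = {8: 0.10, 9: 0.55, 10: 0.20, 11: 0.10, 12: 0.05}

def epitope_score(binding_score, peptide):
    """Rescale a 0-1 binding prediction by the natural ligand length profile."""
    return binding_score * LENGTH_PRIOR.get(len(peptide), 0.0)

print(epitope_score(0.9, "SLYNTVATL"))    # a 9-mer keeps most of its score
print(epitope_score(0.9, "SLYNTVATLY"))   # a 10-mer is down-weighted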
Model Predictive Control of the Current Profile and the Internal Energy of DIII-D Plasmas
NASA Astrophysics Data System (ADS)
Lauret, M.; Wehner, W.; Schuster, E.
2015-11-01
For efficient and stable operation of tokamak plasmas it is important that the current density profile and the internal energy are jointly controlled using the available heating and current-drive (H&CD) sources. The proposed approach is a version of nonlinear model predictive control in which the input set is restricted in size to the possible combinations of the H&CD on/off states. The controller uses real-time predictions over a receding-time horizon of both the current density profile (nonlinear partial differential equation) and the internal energy (nonlinear ordinary differential equation) evolutions. At every time instant the effect of every possible combination of H&CD sources on the current profile and internal energy is evaluated over the chosen time horizon. The combination that leads to the best result, as assessed by a user-defined cost function, is then applied up until the next time instant. Simulation results based on a control-oriented transport code illustrate the effectiveness of the proposed control method. Supported by the US DOE under DE-FC02-04ER54698 & DE-SC0010661.
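A skeleton of the combinatorial MPC loop described here, with a scalar placeholder model standing in for the coupled current-profile PDE and internal-energy ODE predictions; the source count, horizon, and cost are assumptions.

import itertools
import numpy as np

N_SOURCES = 4                       # hypothetical number of H&CD sources
HORIZON = 5                         # receding-horizon length (time steps)

def simulate(on_off, state):
    """Placeholder plant model: stands in for the control-oriented transport
    code's profile and energy predictions over one time step."""
    return state + 0.1 * sum(on_off) - 0.05 * state

def cost(state, target=2.0):
    return (state - target) ** 2    # user-defined cost on the predicted state

def mpc_step(state):
    best, best_cost = None, np.inf
    for combo in itertools.product([0, 1], repeat=N_SOURCES):   # all on/off sets
        s, c = state, 0.0
        for _ in range(HORIZON):                                # predict forward
            s = simulate(combo, s)
            c += cost(s)
        if c < best_cost:
            best, best_cost = combo, c
    return best                     # apply until the next time instant

print(mpc_step(state=1.0))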
Kupczewska-Dobecka, Małgorzata; Jakubowski, Marek; Czerczak, Sławomir
2010-09-01
Our objectives included calculating the permeability coefficient and dermal penetration rates (flux values) for 112 chemicals with occupational exposure limits (OELs) according to the LFER (linear free-energy relationship) model developed using published methods. We also attempted to assign skin notations based on each chemical's molecular structure. Many studies are available in which formulae for coefficients of permeability from saturated aqueous solutions (K(p)) have been related to the physicochemical characteristics of chemicals. The LFER model is based on the solvation equation, which contains five main descriptors predicted from chemical structure: solute excess molar refractivity, dipolarity/polarisability, summation hydrogen bond acidity and basicity, and the McGowan characteristic volume. Descriptor values, available for about 5000 compounds in the Pharma Algorithms Database, were used to calculate permeability coefficients. The dermal penetration rate was estimated as the product of the permeability coefficient and the concentration of the chemical in saturated aqueous solution. Finally, estimated dermal penetration rates were used to assign skin notations to chemicals. Critical fluxes defined from the literature were recommended as reference values for skin notation. The application of Abraham descriptors predicted from chemical structure and LFER analysis to the calculation of permeability coefficients and flux values for chemicals with OELs was successful. Comparison of calculated K(p) values with data obtained earlier from other models showed that LFER predictions were comparable to those obtained by some previously published models, but the differences were much more significant for others. It seems reasonable to conclude that skin should not be characterised as a simple lipophilic barrier alone; both lipophilic and polar pathways of permeation exist across the stratum corneum. It is feasible to predict skin notation on the basis of the LFER and other published models; of the 112 chemicals, 94 (84%) should have the skin notation in the OEL list based on the LFER calculations. The skin notation had been estimated by other published models for almost 94% of the chemicals. Twenty-nine (25.8%) chemicals were identified as having significant absorption and 65 (58%) as having the potential for dermal toxicity. We found major differences between alternative published analytical models and their ability to determine whether particular chemicals were potentially dermotoxic. Copyright © 2010 Elsevier B.V. All rights reserved.
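The solvation-equation calculation is a linear combination of the five Abraham descriptors; a sketch with placeholder coefficients (not the fitted LFER values used in the study), with the flux estimated as permeability coefficient times saturated aqueous concentration.

COEF = {"c": -5.4, "e": -0.1, "s": -1.6, "a": -0.5, "b": -3.2, "v": 1.9}  # placeholders

def log_kp(E, S, A, B, V):
    """log10 permeability coefficient from the five Abraham descriptors:
    excess molar refractivity E, dipolarity/polarisability S, H-bond acidity A,
    H-bond basicity B, and McGowan characteristic volume V."""
    return (COEF["c"] + COEF["e"] * E + COEF["s"] * S
            + COEF["a"] * A + COEF["b"] * B + COEF["v"] * V)

def flux(log_kp_value, c_sat_mg_cm3):
    """Dermal penetration rate = K(p) x saturated aqueous concentration."""
    return 10 ** log_kp_value * c_sat_mg_cm3

lk = log_kp(E=0.8, S=0.9, A=0.3, B=0.6, V=1.0)   # hypothetical descriptor values
print(lk, flux(lk, c_sat_mg_cm3=2.0))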
A simple prediction tool for inhaled corticosteroid response in asthmatic children.
Wu, Yi-Fan; Su, Ming-Wei; Chiang, Bor-Luen; Yang, Yao-Hsu; Tsai, Ching-Hui; Lee, Yungling L
2017-12-07
Inhaled corticosteroids are recommended as the first-line controller medication for childhood asthma owing to their multiple clinical benefits. However, heterogeneity in the response to these drugs remains a significant clinical problem. Children aged 5 to 18 years with mild to moderate persistent asthma were recruited into the Taiwanese Consortium of Childhood Asthma Study. Their responses to inhaled corticosteroids were assessed based on their improvements in the asthma control test and peak expiratory flow. The predictors of responsiveness were demographic and clinical features that are available in primary care settings. We developed a prediction model using logistic regression and simplified it to formulate a practical tool. We assessed its predictive performance using the area under the receiver operating characteristic curve. Of the 73 asthmatic children with baseline and follow-up outcome measurements for inhaled corticosteroid treatment, 24 (33%) were defined as non-responders. The tool we developed consists of three predictors yielding a total score between 0 and 5: the age at physician-diagnosis of asthma, sex, and exhaled nitric oxide. The sensitivity and specificity of the tool for prediction of inhaled corticosteroid non-responsiveness, at a score cut-off of 3, were 0.75 and 0.69, respectively. The area under the receiver operating characteristic curve for the prediction tool was 0.763. Our prediction tool represents a simple and low-cost method for predicting the response to inhaled corticosteroid treatment in asthmatic children.
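Since the abstract reports the score range (0-5) and the cut-off (3) but not the exact point assignments, the following sketch uses assumed cut-offs and weights purely to illustrate how such a tool is applied.

def ics_nonresponse_score(age_at_diagnosis, male, feno_ppb):
    """Toy version of a 0-5 point tool; all cut-offs and weights are assumed,
    since the abstract does not give the exact point assignment."""
    score = 0
    score += 2 if age_at_diagnosis >= 5 else 0     # later diagnosis: +2 (assumed)
    score += 1 if male else 0                      # sex: +1 (assumed)
    score += 2 if feno_ppb >= 25 else 0            # elevated FeNO: +2 (assumed)
    return score

score = ics_nonresponse_score(age_at_diagnosis=7, male=True, feno_ppb=30)
print(score, "predicted non-responder" if score >= 3 else "predicted responder")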
Gene function prediction based on Gene Ontology Hierarchy Preserving Hashing.
Zhao, Yingwen; Fu, Guangyuan; Wang, Jun; Guo, Maozu; Yu, Guoxian
2018-02-23
Gene Ontology (GO) uses structured vocabularies (or terms) to describe the molecular functions, biological roles, and cellular locations of gene products in a hierarchical ontology. GO annotations associate genes with GO terms and indicate that the given gene products carry out the biological functions described by the relevant terms. However, predicting correct GO annotations for genes from the massive set of GO terms defined by GO is a difficult challenge. To combat this challenge, we introduce a Gene Ontology Hierarchy Preserving Hashing (HPHash) based semantic method for gene function prediction. HPHash first measures the taxonomic similarity between GO terms. It then uses a hierarchy-preserving hashing technique to keep the hierarchical order between GO terms and to optimize a series of hashing functions to encode massive GO terms via compact binary codes. After that, HPHash utilizes these hashing functions to project the gene-term association matrix into a low-dimensional one and performs semantic similarity based gene function prediction in the low-dimensional space. Experimental results on three model species (Homo sapiens, Mus musculus and Rattus norvegicus) for interspecies gene function prediction show that HPHash performs better than other related approaches and is robust to the number of hash functions. In addition, we also use HPHash as a plugin for BLAST-based gene function prediction. From the experimental results, HPHash again significantly improves the prediction performance. The code for HPHash is available at: http://mlda.swu.edu.cn/codes.php?name=HPHash. Copyright © 2018 Elsevier Inc. All rights reserved.
Shape: A 3D Modeling Tool for Astrophysics.
Steffen, Wolfgang; Koning, Nicholas; Wenger, Stephan; Morisset, Christophe; Magnor, Marcus
2011-04-01
We present a flexible interactive 3D morpho-kinematical modeling application for astrophysics. Compared to other systems, our application reduces the restrictions on the physical assumptions, data type, and amount that is required for a reconstruction of an object's morphology. It is one of the first publicly available tools to apply interactive graphics to astrophysical modeling. The tool allows astrophysicists to provide a priori knowledge about the object by interactively defining 3D structural elements. By direct comparison of model prediction with observational data, model parameters can then be automatically optimized to fit the observation. The tool has already been successfully used in a number of astrophysical research projects.
Malietzis, G; Monzon, L; Hand, J; Wasan, H; Leen, E; Abel, M; Muhammad, A; Abel, P
2013-01-01
High-intensity focused ultrasound (HIFU) is a rapidly maturing technology with diverse clinical applications. In the field of oncology, the use of HIFU to non-invasively cause tissue necrosis in a defined target, a technique known as focused ultrasound surgery (FUS), has considerable potential for tumour ablation. In this article, we outline the development and underlying principles of HIFU, overview the limitations and commercially available equipment for FUS, then summarise some of the recent technological advances and experimental clinical trials that we predict will have a positive impact on extending the role of FUS in cancer therapy. PMID:23403455
Small peptide signaling pathways modulating macronutrient utilization in plants.
de Bang, Thomas C; Lay, Katerina S; Scheible, Wolf-Rüdiger; Takahashi, Hideki
2017-10-01
Root system architecture (RSA) and physiological functions define macronutrient uptake efficiency. Small signaling peptides (SSPs), that act in manners similar to hormones, and their cognate receptors transmit signals both locally and systemically. Several SSPs controlling morphological and physiological traits of roots have been identified to be associated with macronutrient uptake. Recent development in plant genome research has provided an avenue toward systems-based identification and prediction of additional SSPs. This review highlights recent studies on SSP pathways important for optimization of macronutrient uptake and provides new insights into the diversity of SSPs regulated in response to changes in macronutrient availabilities. Copyright © 2017 Elsevier Ltd. All rights reserved.
Improving Ms Estimates by Calibrating Variable-Period Magnitude Scales at Regional Distances
2008-09-01
Events were classified as normal faults (NF), thrust faults (TF), or oblique-slip variations of normal and thrust faults using the Zoback (1992) classification scheme. Differences between observed and Ms-predicted Mw show a definable faulting-mechanism effect, especially when strike-slip events are compared to those with other mechanisms.
Larsen, Sadie E; Berenbaum, Howard
2017-01-01
A recent meta-analysis found that DSM-III- and DSM-IV-defined traumas were associated with only slightly higher posttraumatic stress disorder (PTSD) symptoms than nontraumatic stressors. The current study is the first to examine whether DSM-5-defined traumas were associated with higher levels of PTSD than DSM-IV-defined traumas. Further, we examined theoretically relevant event characteristics to determine whether characteristics other than those outlined in the DSM could predict PTSD symptoms. One hundred six women who had experienced a trauma or significant stressor completed questionnaires assessing PTSD, depression, impairment, and event characteristics. Events were rated for whether they qualified as DSM-IV and DSM-5 trauma. There were no significant differences between DSM-IV-defined traumas and stressors. For DSM-5, effect sizes were slightly larger but still nonsignificant (except for significantly higher hyperarousal following traumas vs. stressors). Self-reported fear for one's life significantly predicted PTSD symptoms. Our results indicate that the current DSM-5 definition of trauma, although a slight improvement from DSM-IV, is not highly predictive of who develops PTSD symptoms. Our study also indicates the importance of individual perception of life threat in the prediction of PTSD. © 2017 S. Karger AG, Basel.
Martinot-Peignoux, Michelle; Khiri, Hacène; Leclere, Laurence; Maylin, Sarah; Marcellin, Patrick; Halfon, Philippe
2009-11-01
Early viral monitoring is essential for the management of treatment outcome in patients with chronic hepatitis C. A variety of commercial assays are now available to quantify HCV-RNA in routine clinical practice. We compared the clinical results of 3 commercially available assays to evaluate the positive predictive value (PPV) and the negative predictive value (NPV) of rapid virological response (RVR) at week 4 and early virological response (EVR) at week 12. 287 patients treated with a standard-of-care combination therapy regimen were studied. HCV-RNA values were measured at baseline, week 4, and week 12 with the VERSANT HCV 3.0 Assay (bDNA) and VERSANT HCV-RNA Qualitative Assay (TMA) (bDNA/TMA), the COBAS Ampliprep/COBAS TaqMan (CAP/CTM), and the Abbott m2000sp extraction/m2000rt amplification system (ART). RVR was defined as undetectable serum HCV-RNA and EVR as a ≥2 log decline from baseline viral load (BVL). Median (range) BVLs were 5.585 (2.585-6.816), 5.189 (2.792-7.747) and 4.804 (2.380-6.580) log(10) IU/ml with bDNA/TMA, CAP/CTM and ART, respectively (p<0.01); RVR was observed in 22%, 30% and 27% of the patients, and PPVs were 97%, 91% and 94% with bDNA/TMA, CAP/CTM and ART, respectively (p=0.317). EVR was observed in 76%, 73% and 67% of the patients, and NPVs were 93%, 83% and 79% with bDNA/TMA, CAP/CTM and ART, respectively (p=0.09). Treatment monitoring should include both detection of serum HCV-RNA at week 4 to predict SVR and at week 12 to predict non-SVR. The value of all 3 assays was similar for evaluating RVR or EVR. Because of viral load discrepancies, the same assay should be used throughout a patient's treatment follow-up.
Tsou, Paul M; Daffner, Scott D; Holly, Langston T; Shamie, A Nick; Wang, Jeffrey C
2012-02-10
Multiple factors contribute to the determination for surgical intervention in the setting of cervical spinal injury, yet to date no unified classification system exists that predicts this need. The goals of this study were twofold: to create a comprehensive subaxial cervical spine injury severity numeric scoring model, and to determine the predictive value of this model for the probability of surgical intervention. In a retrospective cohort study of 333 patients, neural impairment, patho-morphology, and available spinal canal sagittal diameter post-injury were selected as injury severity determinants. A common numeric scoring trend was created; smaller values indicated less favorable clinical conditions. Neural impairment was graded from 2-10, patho-morphology scoring ranged from 2-15, and post-injury available canal sagittal diameter (SD) was measured in millimeters at the narrowest point of injury. Logistic regression analysis was performed using the numeric scores to predict the probability for surgical intervention. Complete neurologic deficit was found in 39 patients, partial deficits in 108, root injuries in 19, and 167 were neurologically intact. The pre-injury mean canal SD was 14.6 mm; the post-injury measurement mean was 12.3 mm. The mean patho-morphology score for all patients was 10.9 and the mean neurologic function score was 7.6. There was a statistically significant difference in mean scores for neural impairment, canal SD, and patho-morphology for surgical compared to nonsurgical patients. At the lowest clinical score for each determinant, the probability for surgery was 0.949 for neural impairment, 0.989 for post-injury available canal SD, and 0.971 for patho-morphology. The unit odds ratio for each determinant was 1.73, 1.61, and 1.45, for neural impairment, patho-morphology, and canal SD scores, respectively. The subaxial cervical spine injury severity determinants of neural impairment, patho-morphology, and post-injury available canal SD have well defined probability for surgical intervention when scored separately. Our data showed that each determinant alone could act as a primary predictor for surgical intervention.
Accelerating Adverse Outcome Pathway Development Using ...
The adverse outcome pathway (AOP) concept links molecular perturbations with organism- and population-level outcomes to support high-throughput toxicity testing. International efforts are underway to define AOPs and store the information supporting them in a central knowledgebase; however, this process is currently labor-intensive and time-consuming. Publicly available data sources provide a wealth of information that could be used to define computationally-predicted AOPs (cpAOPs), which could serve as a basis for creating expert-derived AOPs in a much more efficient way. Computational tools for mining large datasets provide the means for extracting and organizing the information captured in these public data sources. Using cpAOPs as a starting point for expert-derived AOPs should accelerate AOP development. Coupling this with tools to coordinate and facilitate the expert development efforts will increase the number and quality of AOPs produced, which should play a key role in advancing the adoption of twenty-first century toxicity testing strategies. This review article describes how effective knowledge management and automated approaches to AOP development can enhance and accelerate the development and use of AOPs. As the principles documented in this review are put into practice, we anticipate that the quality and quantity of AOPs available will increase substantially. This, in turn, will aid in the interpretation of ToxCast and other high-throughput toxicity data.
Spreco, Armin; Eriksson, Olle; Dahlström, Örjan; Cowling, Benjamin John; Timpka, Toomas
2017-06-15
Influenza is a viral respiratory disease capable of causing epidemics that represent a threat to communities worldwide. The rapidly growing availability of electronic "big data" from diagnostic and prediagnostic sources in health care and public health settings permits the advance of a new generation of methods for local detection and prediction of winter influenza seasons and influenza pandemics. The aim of this study was to present a method for integrated detection and prediction of influenza virus activity in local settings using electronically available surveillance data and to evaluate its performance by retrospective application on authentic data from a Swedish county. An integrated detection and prediction method was formally defined based on a design rationale for influenza detection and prediction methods adapted for local surveillance. The novel method was retrospectively applied to data from the winter influenza season 2008-09 in a Swedish county (population 445,000). Outcome data represented individuals who met a clinical case definition for influenza (based on International Classification of Diseases version 10 [ICD-10] codes) from an electronic health data repository. Information from calls to a telenursing service in the county was used as the syndromic data source. The novel integrated detection and prediction method is based on nonmechanistic statistical models and is designed for integration in local health information systems. The method is divided into separate modules for detection and prediction of local influenza virus activity. The function of the detection module is to alert for an upcoming period of increased load of influenza cases on local health care (using influenza-diagnosis data), whereas the function of the prediction module is to predict the timing of the activity peak (using syndromic data) and its intensity (using influenza-diagnosis data). For detection modeling, exponential regression was used, based on the assumption that the beginning of a winter influenza season shows exponential growth in infected individuals. For prediction modeling, linear regression was applied to 7-day periods, one at a time, to find the peak timing, whereas a derivative of a normal distribution density function was used to find the peak intensity. We found that the integrated detection and prediction method detected the 2008-09 winter influenza season on its starting day (optimal timeliness 0 days), whereas the predicted peak was estimated to occur 7 days ahead of the actual peak and the predicted peak intensity was estimated to be 26% lower than the actual intensity (6.3 compared with 8.5 influenza-diagnosis cases/100,000). Our detection and prediction method is one of the first integrated methods specifically designed for local application on influenza data electronically available for surveillance. The performance of the method in a retrospective study indicates that further prospective evaluations of the method are justified. ©Armin Spreco, Olle Eriksson, Örjan Dahlström, Benjamin John Cowling, Toomas Timpka. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 15.06.2017.
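As a rough illustration of the two modules described above, the sketch below fits a log-linear (exponential) model to recent influenza-diagnosis counts for detection and slides a 7-day linear regression along syndromic counts to flag the peak. It is a minimal sketch in Python; the window lengths, threshold, and function names are assumptions, not the authors' implementation.

```python
import numpy as np

def detect_season_start(daily_cases, window=14, growth_threshold=0.05):
    """Alert when a log-linear fit over the trailing window indicates
    exponential growth in influenza-diagnosis cases."""
    y = np.asarray(daily_cases[-window:], dtype=float)
    t = np.arange(window)
    slope, _ = np.polyfit(t, np.log(y + 1.0), 1)  # y ~ a*exp(b*t) is linear in log space
    return slope > growth_threshold

def predict_peak_timing(syndromic_counts, window=7):
    """Slide a 7-day linear regression along syndromic (telenursing) counts;
    flag the peak where the fitted slope turns from positive to non-positive."""
    y = np.asarray(syndromic_counts, dtype=float)
    slopes = [np.polyfit(np.arange(window), y[i:i + window], 1)[0]
              for i in range(len(y) - window + 1)]
    for i in range(1, len(slopes)):
        if slopes[i - 1] > 0 >= slopes[i]:
            return i + window // 2  # approximate index of the activity peak
    return None
```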
Multivariate Regression Analysis of Winter Ozone Events in the Uinta Basin of Eastern Utah, USA
NASA Astrophysics Data System (ADS)
Mansfield, M. L.
2012-12-01
I report on a regression analysis of a number of variables that are involved in the formation of winter ozone in the Uinta Basin of Eastern Utah. One goal of the analysis is to develop a mathematical model capable of predicting the daily maximum ozone concentration from the values of a number of independent variables. The dependent variable is the daily maximum ozone concentration at a particular site in the basin. Independent variables are (1) daily lapse rate, (2) daily "basin temperature" (defined below), (3) snow cover, (4) midday solar zenith angle, (5) monthly oil production, (6) monthly gas production, and (7) the number of days since the beginning of a multi-day inversion event. Daily maximum temperature and daily snow cover data are available at ten or fifteen different sites throughout the basin. The daily lapse rate is defined operationally as the slope of the linear least-squares fit to the temperature-altitude plot, and the "basin temperature" is defined as the value assumed by the same least-squares line at an altitude of 1400 m. A multi-day inversion event is defined as a set of consecutive days for which the lapse rate remains positive. The standard deviation of the model's prediction error is about 10 ppb. The model has been combined with historical climate and oil and gas production data to estimate historical ozone levels.
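The two operational definitions in this abstract translate directly into a least-squares computation. A minimal sketch, assuming temperature and altitude arrays from the basin's monitoring sites (function and variable names are illustrative):

```python
import numpy as np

def lapse_rate_and_basin_temp(altitudes_m, temps_c, ref_altitude_m=1400.0):
    """Lapse rate = slope of the least-squares line through the
    temperature-altitude data; 'basin temperature' = that same line
    evaluated at 1400 m, as defined in the abstract."""
    slope, intercept = np.polyfit(altitudes_m, temps_c, 1)
    return slope, slope * ref_altitude_m + intercept
```

Under this sign convention, a positive slope (temperature increasing with altitude) marks an inversion day, matching the abstract's definition of a multi-day inversion event.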
The cognitive activation theory of stress.
Ursin, Holger; Eriksen, Hege R
2004-06-01
This paper presents a cognitive activation theory of stress (CATS), with a formal system of systematic definitions. The term "stress" is used for four aspects of "stress": stress stimuli, stress experience, the non-specific general stress response, and the experience of the stress response. These four meanings may be measured separately. The stress response is a general alarm in a homeostatic system, producing general and unspecific neurophysiological activation from one level of arousal to a higher level. The stress response occurs whenever there is something missing, for instance a homeostatic imbalance, or a threat to homeostasis and the life of the organism. Formally, the alarm occurs when there is a discrepancy between what should be and what is, that is, between the value a variable should have (set value, SV) and the real value (actual value, AV) of the same variable. The stress response, therefore, is an essential and necessary physiological response. The unpleasantness of the alarm is not in itself a health threat. However, if sustained, the response may lead to illness and disease through established pathophysiological processes ("allostatic load"). The alarm elicits specific behaviors to cope with the situation. The level of alarm depends on expectancy of the outcome of stimuli and the specific responses available for coping. Psychological defense is defined as a distortion of stimulus expectancies. Response outcome expectancies are defined as positive, negative, or none, relative to the available responses. This offers formal definitions of coping, hopelessness, and helplessness that are easy to operationalize in man and in animals. It is an essential element of CATS that only when coping is defined as positive outcome expectancy does the concept predict relations to health and disease.
A computer program for thermal radiation from gaseous rocket exhaust plumes (GASRAD)
NASA Technical Reports Server (NTRS)
Reardon, J. E.; Lee, Y. C.
1979-01-01
A computer code is presented for predicting incident thermal radiation from defined plume gas properties in either axisymmetric or cylindrical coordinate systems. The radiation model is a statistical band model for exponential line strength distribution with Lorentz/Doppler line shapes for 5 gaseous species (H2O, CO2, CO, HCl and HF) and an approximate (non-scattering) treatment of carbon particles. The Curtis-Godson approximation is used for inhomogeneous gases, but a subroutine is available for using Young's intuitive derivative method for H2O with Lorentz line shape and exponentially-tailed-inverse line strength distribution. The geometry model provides integration over a hemisphere with up to 6 individually oriented identical axisymmetric plumes or a single 3-D plume. Shading surfaces may be used in any of 7 shapes, and a conical limit may be defined for the plume to set individual line-of-sight limits. Intermediate coordinate systems may be specified to simplify input of plumes and shading surfaces.
Framework for making better predictions by directly estimating variables' predictivity.
Lo, Adeline; Chernoff, Herman; Zheng, Tian; Lo, Shaw-Hwa
2016-12-13
We propose approaching prediction from a framework grounded in the theoretical correct prediction rate of a variable set as a parameter of interest. This framework allows us to define a measure of predictivity that enables assessing variable sets for, preferably high, predictivity. We first define the prediction rate for a variable set and consider, and ultimately reject, the naive estimator, a statistic based on the observed sample data, due to its inflated bias for moderate sample size and its sensitivity to noisy useless variables. We demonstrate that the I-score of the partition retention (PR) method of variable selection (VS) yields a relatively unbiased estimate of a parameter that is not sensitive to noisy variables and is a lower bound to the parameter of interest. Thus, the PR method using the I-score provides an effective approach to selecting highly predictive variables. We offer simulations and an application of the I-score on real data to demonstrate the statistic's predictive performance on sample data. We conjecture that using partition retention and the I-score can aid in finding variable sets with promising prediction rates; however, further research in the avenue of sample-based measures of predictivity is much desired.
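To make the rejected baseline concrete, here is a minimal sketch of the naive sample-based estimator of a variable set's correct prediction rate: partition the observations by their joint values on the (discrete) variables and predict the majority class within each cell. The partitioning scheme here is an assumption in the spirit of partition retention, not the authors' code.

```python
from collections import Counter, defaultdict

def naive_prediction_rate(X, y):
    """Naive estimate of a variable set's correct prediction rate:
    group observations by their joint values on the variables in X and
    predict the majority class within each partition cell."""
    cells = defaultdict(list)
    for row, label in zip(map(tuple, X), y):
        cells[row].append(label)
    correct = sum(Counter(labels).most_common(1)[0][1] for labels in cells.values())
    return correct / len(y)
```

Because every occupied cell scores at least its majority count, this statistic is biased upward for moderate sample sizes and rewards noisy variables that fragment the partition, which is exactly the behavior that motivates the I-score.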
Furushima, Taishi; Miyachi, Motohiko; Iemitsu, Motoyuki; Murakami, Haruka; Kawano, Hiroshi; Gando, Yuko; Kawakami, Ryoko; Sanada, Kiyoshi
2017-08-29
This study aimed to develop and cross-validate prediction equations for estimating appendicular skeletal muscle mass (ASM) and to examine the relationship between sarcopenia defined by the prediction equations and risk factors for cardiovascular diseases (CVD) or osteoporosis in Japanese men and women. Subjects were healthy men and women aged 20-90 years, who were randomly allocated to the following two groups: the development group (D group; 257 men, 913 women) and the cross-validation group (V group; 119 men, 112 women). To develop prediction equations, stepwise multiple regression analyses were performed on data obtained from the D group, using ASM measured by dual-energy X-ray absorptiometry (DXA) as a dependent variable and five easily obtainable measures (age, height, weight, waist circumference, and handgrip strength) as independent variables. When the prediction equations for ASM estimation were applied to the V group, a significant correlation was found between DXA-measured ASM and predicted ASM in both men and women (R² = 0.81 and R² = 0.72). Our prediction equations had higher R² values compared to previously developed equations (R² = 0.75-0.59 and R² = 0.69-0.40) in both men and women. Moreover, sarcopenia defined by predicted ASM was related to risk factors for osteoporosis and CVD, as was sarcopenia defined by DXA-measured ASM. In this study, novel prediction equations were developed and cross-validated in Japanese men and women. Our analyses validated the clinical significance of these prediction equations and showed that previously reported equations were not applicable in a Japanese population.
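For illustration, the form of such a prediction equation can be reproduced with ordinary least squares on the five predictors named above (plain least squares rather than the study's stepwise procedure; names and data are placeholders):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_asm_equation(age, height, weight, waist, grip, asm_dxa):
    """Fit DXA-measured ASM on the five easily obtainable measures.
    Returns the fitted model and its in-sample R² for comparison with
    the validation R² reported in the abstract."""
    X = np.column_stack([age, height, weight, waist, grip])
    model = LinearRegression().fit(X, asm_dxa)
    return model, model.score(X, asm_dxa)
```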
Car ownership and the association between fruit and vegetable availability and diet.
Bodor, J Nicholas; Hutchinson, Paul L; Rose, Donald
2013-12-01
Most research on the food environment and diet has not accounted for car ownership, a potentially key modifying factor. This study examined the modifying effect of car ownership on the relationship between neighborhood fruit and vegetable availability and intake. Data on respondents' (n=760) fruit and vegetable intake, car ownership, and demographics came from the 2008 New Orleans Behavioral Risk Factor Surveillance System. Shelf space data on fresh, frozen, and canned fruits and vegetables were collected in 2008 from a random sample of New Orleans stores (n=114). Availability measures were constructed by summing the amount of fruit and vegetable shelf space in all stores within defined distances from respondent households. Regression analyses controlled for demographics and were run separately for respondents with and without a car. Fruit and vegetable availability was positively associated with intake among non-car owners: an additional 100 m of shelf space within 2 km of a residence was predictive of a half-serving/day increase in fruit and vegetable intake. Availability was not associated with intake among car owners. Future research and interventions to increase neighborhood healthy food options should consider car ownership rates in their target areas as an important modifying factor. © 2013.
Masuda, Y; Misztal, I; Tsuruta, S; Legarra, A; Aguilar, I; Lourenco, D A L; Fragomeni, B O; Lawlor, T J
2016-03-01
The objectives of this study were to develop and evaluate an efficient implementation of the inverse of the genomic relationship matrix computed with the recursion algorithm, called the algorithm for proven and young (APY), in single-step genomic BLUP. We validated genomic predictions for young bulls with more than 500,000 genotyped animals in final score for US Holsteins. Phenotypic data included 11,626,576 final scores on 7,093,380 US Holstein cows, and genotypes were available for 569,404 animals. Daughter deviations for young bulls with no classified daughters in 2009, but at least 30 classified daughters in 2014, were computed using all the phenotypic data. Genomic predictions for the same bulls were calculated with single-step genomic BLUP using phenotypes up to 2009. We calculated the inverse of the genomic relationship matrix, G_APY^(-1), based on a direct inversion of the genomic relationship matrix for a small subset of genotyped animals (core animals), and extended that information to noncore animals by recursion. We tested several sets of core animals, including 9,406 bulls with at least 1 classified daughter; 9,406 bulls and 1,052 classified dams of bulls; 9,406 bulls and 7,422 classified cows; and random samples of 5,000 to 30,000 animals. Validation reliability was assessed by the coefficient of determination from regression of daughter deviation on genomic predictions for the predicted young bulls. The reliabilities were 0.39 with 5,000 randomly chosen core animals, 0.45 with the 9,406 bulls and 7,422 cows as core animals, and 0.44 with the remaining sets. With phenotypes truncated in 2009 and the preconditioned conjugate gradient to solve mixed model equations, the number of rounds to convergence for core animals defined by bulls was 1,343; defined by bulls and cows, 2,066; and defined by 10,000 random animals, at most 1,629. With complete phenotype data, the number of rounds decreased to 858, 1,299, and at most 1,092, respectively. Setting up G_APY^(-1) for 569,404 genotyped animals with 10,000 core animals took 1.3 h and 57 GB of memory. The validation reliability with APY reaches a plateau when the number of core animals is at least 10,000. Predictions with APY show little difference in reliability among definitions of core animals. Single-step genomic BLUP with APY is applicable to millions of genotyped animals. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
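A minimal dense-matrix sketch of the APY construction follows, using the block form commonly given for the APY inverse (direct inversion for core animals, recursion for noncore). It is an illustration with assumed index arrays, not the production implementation, which never forms the full G for half a million animals.

```python
import numpy as np

def apy_inverse(G, core_idx, noncore_idx):
    """Sketch of the APY inverse of a genomic relationship matrix G:
    invert only the core block directly and extend to noncore animals
    by recursion, with diagonal 'Mendelian sampling' terms for noncore."""
    Gcc = G[np.ix_(core_idx, core_idx)]
    Gnc = G[np.ix_(noncore_idx, core_idx)]
    Gcc_inv = np.linalg.inv(Gcc)
    P = Gnc @ Gcc_inv                                  # recursion coefficients
    g_nn = G[np.ix_(noncore_idx, noncore_idx)].diagonal()
    m = g_nn - np.einsum('ij,ij->i', P, Gnc)           # diagonal variances
    Minv = np.diag(1.0 / m)
    top_left = Gcc_inv + P.T @ Minv @ P
    top_right = -P.T @ Minv
    return np.block([[top_left, top_right], [top_right.T, Minv]])
```

The payoff in practice is that only the core block (e.g., 10,000 x 10,000) is inverted, while each noncore animal contributes only a scalar diagonal term and sparse cross-products.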
Jafarzadeh, S Reza; Johnson, Wesley O; Gardner, Ian A
2016-03-15
The area under the receiver operating characteristic (ROC) curve (AUC) is used as a performance metric for quantitative tests. Although multiple biomarkers may be available for diagnostic or screening purposes, diagnostic accuracy is often assessed individually rather than in combination. In this paper, we consider the interesting problem of combining multiple biomarkers for use in a single diagnostic criterion with the goal of improving the diagnostic accuracy above that of an individual biomarker. The diagnostic criterion created from multiple biomarkers is based on the predictive probability of disease, conditional on given multiple biomarker outcomes. If the computed predictive probability exceeds a specified cutoff, the corresponding subject is allocated as 'diseased'. This defines a standard diagnostic criterion that has its own ROC curve, namely, the combined ROC (cROC). The AUC metric for cROC, namely, the combined AUC (cAUC), is used to compare the predictive criterion based on multiple biomarkers to one based on fewer biomarkers. A multivariate random-effects model is proposed for modeling multiple normally distributed dependent scores. Bayesian methods for estimating ROC curves and corresponding (marginal) AUCs are developed when a perfect reference standard is not available. In addition, cAUCs are computed to compare the accuracy of different combinations of biomarkers for diagnosis. The methods are evaluated using simulations and are applied to data for Johne's disease (paratuberculosis) in cattle. Copyright © 2015 John Wiley & Sons, Ltd.
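As a frequentist analogue of the combined criterion, the sketch below maps multiple biomarker scores to a predictive probability of disease and scores the resulting single criterion with an AUC. The paper itself uses a Bayesian multivariate random-effects model that does not require a perfect reference standard, so this simplified version assumes disease status is known.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def combined_auc(biomarkers, disease_status):
    """Combine multiple biomarkers into a single diagnostic criterion via
    the predictive probability of disease, then compute the AUC of the
    resulting combined ROC (the cAUC idea, frequentist version)."""
    model = LogisticRegression().fit(biomarkers, disease_status)
    p = model.predict_proba(biomarkers)[:, 1]  # predictive probability of disease
    return roc_auc_score(disease_status, p)
```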
Custodio, Joseph M.; Wu, Chi-Yuan; Benet, Leslie Z.
2008-01-01
The ability to predict drug disposition involves concurrent consideration of many chemical and physiological variables, and the effect of food on the rate and extent of availability adds further complexity due to postprandial changes in the gastrointestinal (GI) tract. A system that allows for the assessment of the multivariate interplay occurring following administration of an oral dose, in the presence or absence of a meal, would greatly benefit the early stages of drug development. This is particularly true in an era when the majority of new molecular entities are highly permeable, poorly soluble, extensively metabolized compounds (BDDCS Class 2), which present the most complicated relationship in defining the impact of transporters due to the marked effects of transporter-enzyme interplay. This review evaluates the GI luminal environment by taking into account the absorption/transport/elimination interplay and evaluates the physicochemical property issues by taking into account the importance of solubility, permeability and metabolism. We concentrate on the BDDCS and its utility in predicting drug disposition. Furthermore, we focus on the effect of food on the extent of drug availability (F), which appears to follow closely what might be expected if a significant effect of high-fat meals is inhibition of transporters. That is, high-fat meals and lipidic excipients would be expected to have little effect on F for Class 1 drugs; they would increase F of Class 2 drugs, while decreasing F for Class 3 drugs. PMID:18199522
Maneuver Classification for Aircraft Fault Detection
NASA Technical Reports Server (NTRS)
Oza, Nikunj C.; Tumer, Irem Y.; Tumer, Kagan; Huff, Edward M.
2003-01-01
Automated fault detection is an increasingly important problem in aircraft maintenance and operation. Standard methods of fault detection assume the availability of either data produced during all possible faulty operation modes or a clearly-defined means to determine whether the data provide a reasonable match to known examples of proper operation. In the domain of fault detection in aircraft, identifying all possible faulty and proper operating modes is clearly impossible. We envision a system for online fault detection in aircraft, one part of which is a classifier that predicts the maneuver being performed by the aircraft as a function of vibration data and other available data. To develop such a system, we use flight data collected under a controlled test environment, subject to many sources of variability. We explain where our classifier fits into the envisioned fault detection system and describe experiments showing the promise of this classification subsystem.
Classification of Aircraft Maneuvers for Fault Detection
NASA Technical Reports Server (NTRS)
Oza, Nikunj; Tumer, Irem Y.; Tumer, Kagan; Huff, Edward M.; Koga, Dennis (Technical Monitor)
2002-01-01
Automated fault detection is an increasingly important problem in aircraft maintenance and operation. Standard methods of fault detection assume the availability of either data produced during all possible faulty operation modes or a clearly-defined means to determine whether the data provide a reasonable match to known examples of proper operation. In the domain of fault detection in aircraft, the first assumption is unreasonable and the second is difficult to determine. We envision a system for online fault detection in aircraft, one part of which is a classifier that predicts the maneuver being performed by the aircraft as a function of vibration data and other available data. To develop such a system, we use flight data collected under a controlled test environment, subject to many sources of variability. We explain where our classifier fits into the envisioned fault detection system and describe experiments showing the promise of this classification subsystem.
The U.S. Geological Survey Land Remote Sensing Program
2007-01-01
The fundamental goals of the U.S. Geological Survey's Land Remote Sensing (LRS) Program are to provide the Federal Government and the public with a primary source of remotely sensed data and applications and to be a leader in defining the future of land remote sensing, nationally and internationally. Remotely sensed data provide information that enhances the understanding of ecosystems and the capability to predict ecosystem change. The data promote an understanding of the role of the environment and wildlife in human health issues, the requirements for disaster response, the effects of climate variability, and the availability of energy and mineral resources. Also, as land satellite systems acquire global coverage, the program coordinates a network of international receiving stations and users of the data. It is the responsibility of the program to assure that data from land imaging satellites, airborne photography, radar, and other technologies are available to the national and global science communities.
Den Hartog, Emiel A; Havenith, George
2010-01-01
For wearers of protective clothing in radiation environments, no quantitative guidelines are available for the effect of a radiative heat load on heat exchange. Under the European Union funded project ThermProtect, an analytical effort was defined to address the issue of radiative heat load while wearing protective clothing. Because much information from thermal manikin experiments in thermal radiation environments became available within the ThermProtect project, these experimental data sets were used to verify the analytical approach. The analytical approach provided a good prediction of the heat loss in the manikin experiments; 95% of the variance was explained by the model. The model has not yet been validated at high radiative heat loads and neglects some physical properties of the radiation emissivity. Still, the analysis offers a pragmatic approach and may be useful for practical implementation in protective clothing standards for moderate thermal radiation environments.
SASS wind ambiguity removal by direct minimization. [Seasat-A satellite scatterometer
NASA Technical Reports Server (NTRS)
Hoffman, R. N.
1982-01-01
An objective analysis procedure is presented which combines Seasat-A satellite scatterometer (SASS) data with other available data on wind speeds by minimizing an objective function of gridded wind speed values. The objective function is defined as the sum of loss functions for the SASS velocity data, the forecast, the SASS velocity magnitude data, and conventional wind speed data. Only aliases closest to the analysis were included, and a method is introduced for improving the first guess by using a minimization technique while slowly changing the parameters of the problem. The model is employed to predict the wind field for the North Atlantic on Sept. 10, 1978. Dealiased SASS data are compared with available ship readings, showing good agreement between the SASS dealiased winds and the winds measured at the surface. Expansion of the model to incorporate low-level cloud measurements, pressure data, and convergence and cloud level data correlations is discussed.
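The direct-minimization idea reduces to optimizing a composite loss over the gridded wind field. A toy sketch with only two of the loss terms (the weights, names, and quadratic form of each term are assumptions for illustration):

```python
import numpy as np
from scipy.optimize import minimize

def analyze_winds(u0, sass_speed, forecast, w_sass=1.0, w_fc=0.5):
    """Find gridded wind speeds that trade off closeness to SASS-derived
    speeds against closeness to the forecast. The full objective would add
    loss terms for alias-resolved velocities and conventional observations."""
    def objective(u):
        return (w_sass * np.sum((u - sass_speed) ** 2)
                + w_fc * np.sum((u - forecast) ** 2))
    return minimize(objective, u0, method='L-BFGS-B').x
```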
Cell-Free and In Vivo Characterization of Lux, Las, and Rpa Quorum Activation Systems in E. coli.
Halleran, Andrew D; Murray, Richard M
2018-02-16
Synthetic biologists have turned toward quorum systems as a path for building sophisticated microbial consortia that exhibit group decision making. Currently, however, even the most complex consortium circuits rely on only one or two quorum sensing systems, greatly restricting the available design space. High-throughput characterization of available quorum sensing systems is useful for finding compatible sets of systems that are suitable for a defined circuit architecture. Recently, cell-free systems have gained popularity as a test-bed for rapid prototyping of genetic circuitry. We take advantage of the transcription-translation cell-free system to characterize three commonly used Lux-type quorum activators, Lux, Las, and Rpa. We then compare the cell-free characterization to results obtained in vivo. We find significant genetic crosstalk in both the Las and Rpa systems and substantial signal crosstalk in Lux activation. We show that cell-free characterization predicts crosstalk observed in vivo.
Stefan, Sabina; Schorr, Barbara; Lopez-Rolon, Alex; Kolassa, Iris-Tatjana; Shock, Jonathan P; Rosenfelder, Martin; Heck, Suzette; Bender, Andreas
2018-04-17
We applied the following methods to resting-state EEG data from patients with disorders of consciousness (DOC) for consciousness indexing and outcome prediction: microstates, entropy (i.e. approximate, permutation), power in alpha and delta frequency bands, and connectivity (i.e. weighted symbolic mutual information, symbolic transfer entropy, complex network analysis). Patients with unresponsive wakefulness syndrome (UWS) and patients in a minimally conscious state (MCS) were classified into these two categories by fitting and testing a generalised linear model. We aimed subsequently to develop an automated system for outcome prediction in severe DOC by selecting an optimal subset of features using sequential floating forward selection (SFFS). The two outcome categories were defined as UWS or dead, and MCS or emerged from MCS. Percentage of time spent in microstate D in the alpha frequency band performed best at distinguishing MCS from UWS patients. The average clustering coefficient obtained from thresholding beta coherence performed best at predicting outcome. The optimal subset of features selected with SFFS consisted of the frequency of microstate A in the 2-20 Hz frequency band, path length obtained from thresholding alpha coherence, and average path length obtained from thresholding alpha coherence. Combining these features seemed to afford high prediction power. Python and MATLAB toolboxes for the above calculations are freely available under the GNU public license for non-commercial use ( https://qeeg.wordpress.com ).
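Sequential floating forward selection of the kind used here is available off the shelf; a sketch using mlxtend's implementation is shown below (the choice of mlxtend, the estimator, and the scoring are assumptions; the authors provide their own Python and MATLAB toolboxes at the URL above).

```python
from sklearn.linear_model import LogisticRegression
from mlxtend.feature_selection import SequentialFeatureSelector

# SFFS: forward selection with conditional exclusion ('floating') steps.
sffs = SequentialFeatureSelector(LogisticRegression(max_iter=1000),
                                 k_features=3,    # e.g. the 3-feature subset reported
                                 forward=True,
                                 floating=True,
                                 scoring='accuracy',
                                 cv=5)
# sffs = sffs.fit(X_features, outcome)  # X_features: EEG-derived feature matrix
# selected = sffs.k_feature_idx_        # indices of the chosen features
```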
Patient-specific dosimetric endpoints based treatment plan quality control in radiotherapy.
Song, Ting; Staub, David; Chen, Mingli; Lu, Weiguo; Tian, Zhen; Jia, Xun; Li, Yongbao; Zhou, Linghong; Jiang, Steve B; Gu, Xuejun
2015-11-07
In intensity modulated radiotherapy (IMRT), the optimal plan for each patient is specific due to unique patient anatomy. To achieve such a plan, patient-specific dosimetric goals reflecting each patient's unique anatomy should be defined and adopted in the treatment planning procedure for plan quality control. This study develops such a personalized treatment plan quality control tool by predicting patient-specific dosimetric endpoints (DEs). The incorporation of patient-specific DEs is realized by a multi-OAR geometry-dosimetry model, capable of predicting optimal DEs based on the individual patient's geometry. The overall quality of a treatment plan is then judged with a numerical treatment plan quality indicator and characterized as optimal or suboptimal. Taking advantage of clinically available prostate volumetric modulated arc therapy (VMAT) treatment plans, we built and evaluated our proposed plan quality control tool. Using our developed tool, six of twenty evaluated plans were identified as suboptimal plans. After plan re-optimization, these suboptimal plans achieved better OAR dose sparing without sacrificing the PTV coverage, and the dosimetric endpoints of the re-optimized plans agreed well with the model-predicted values, which validates the predictive ability of the proposed tool. In conclusion, the developed tool is able to accurately predict optimally achievable DEs of multiple OARs, identify suboptimal plans, and guide plan optimization. It is a useful tool for achieving patient-specific treatment plan quality control.
Kurgan, Lukasz; Cios, Krzysztof; Chen, Ke
2008-05-01
Protein structure prediction methods provide accurate results when a homologous protein is predicted, while poorer predictions are obtained in the absence of homologous templates. However, some protein chains that share twilight-zone pairwise identity can form similar folds, and thus determining structural similarity without sequence similarity would be desirable for structure prediction. The folding type of a protein or its domain is defined as the structural class. Current structural class prediction methods that predict the four structural classes defined in SCOP provide up to 63% accuracy for datasets in which the sequence identity of any pair of sequences belongs to the twilight zone. We propose the SCPRED method, which improves prediction accuracy for sequences that share twilight-zone pairwise similarity with the sequences used for the prediction. SCPRED uses a support vector machine classifier that takes several custom-designed features as its input to predict the structural classes. Based on an extensive design that considers over 2300 index-, composition- and physicochemical properties-based features along with features based on the predicted secondary structure and content, the classifier's input includes 8 features based on information extracted from the secondary structure predicted with PSI-PRED and one feature computed from the sequence. Tests performed with datasets of 1673 protein chains, in which any pair of sequences shares twilight-zone similarity, show that SCPRED obtains 80.3% accuracy when predicting the four SCOP-defined structural classes, which is superior when compared with over a dozen recent competing methods based on support vector machines, logistic regression, and ensembles of classifiers. SCPRED can accurately find similar structures for sequences that share low identity with the sequences used for the prediction. The high predictive accuracy achieved by SCPRED is attributed to the design of the features, which are capable of separating the structural classes in spite of their low dimensionality. We also demonstrate that SCPRED's predictions can be successfully used as a post-processing filter to improve the performance of modern fold classification methods.
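The classifier itself is conceptually simple once the 9 features are computed; a minimal sketch of an SVM over such a feature matrix (the feature extraction, which is the paper's actual contribution, is not reproduced, and the kernel choice is an assumption):

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# 9-dimensional input per chain: 8 features from PSI-PRED-predicted
# secondary structure plus 1 sequence-derived feature; 4 SCOP class labels.
clf = make_pipeline(StandardScaler(), SVC(kernel='rbf'))
# clf.fit(X_train, y_train)             # X_train: n_chains x 9 matrix
# accuracy = clf.score(X_test, y_test)  # compare with the reported 80.3%
```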
Predictive modeling of neuroanatomic structures for brain atrophy detection
NASA Astrophysics Data System (ADS)
Hu, Xintao; Guo, Lei; Nie, Jingxin; Li, Kaiming; Liu, Tianming
2010-03-01
In this paper, we present an approach for predictive modeling of neuroanatomic structures for the detection of brain atrophy based on cross-sectional MRI images. The underlying premise of applying predictive modeling for atrophy detection is that brain atrophy is defined as significant deviation of part of the anatomy from what the remaining normal anatomy predicts for that part. The steps of predictive modeling are as follows. The central cortical surface under consideration is reconstructed from the brain tissue map, and Regions of Interest (ROIs) on it are predicted from other, reliable anatomies. The vertex pair-wise distance between the predicted vertex and the true one within the abnormal region is expected to be larger than that of vertices in normal brain regions. The change of the white matter/gray matter ratio within a spherical region is used to identify the direction of vertex displacement. In this way, the severity of brain atrophy can be defined quantitatively by the displacements of those vertices. The proposed predictive modeling method has been evaluated using both simulated atrophies and MRI images of Alzheimer's disease.
Woodin, Sarah A; Hilbish, Thomas J; Helmuth, Brian; Jones, Sierra J; Wethey, David S
2013-09-01
Modeling the biogeographic consequences of climate change requires confidence in model predictions under novel conditions. However, models often fail when extended to new locales, and such instances have been used as evidence of a change in physiological tolerance, that is, a fundamental niche shift. We explore an alternative explanation and propose a method for predicting the likelihood of failure based on physiological performance curves and environmental variance in the original and new environments. We define the transient event margin (TEM) as the gap between energetic performance failure, defined as CTmax, and the upper lethal limit, defined as LTmax. If TEM is large relative to environmental fluctuations, models will likely fail in new locales. If TEM is small relative to environmental fluctuations, models are likely to be robust for new locales, even when mechanism is unknown. Using temperature, we predict when biogeographic models are likely to fail and illustrate this with a case study. We suggest that failure is predictable from an understanding of how climate drives nonlethal physiological responses, but for many species such data have not been collected. Successful biogeographic forecasting thus depends on understanding when the mechanisms limiting distribution of a species will differ among geographic regions, or at different times, resulting in realized niche shifts. TEM allows prediction of the likelihood of such model failure.
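The TEM itself is a one-line computation; the sketch below encodes it together with the paper's qualitative decision rule (the explicit threshold comparison is an assumption about how one would operationalize "large relative to environmental fluctuations").

```python
def transient_event_margin(lt_max, ct_max):
    """TEM as defined in the abstract: the gap between the upper lethal
    limit (LTmax) and energetic performance failure (CTmax)."""
    return lt_max - ct_max

def model_failure_likely(lt_max, ct_max, env_fluctuation_range):
    """If TEM is large relative to environmental fluctuations, a
    biogeographic model is likely to fail when transferred to a new
    locale; if small, it is likely to be robust."""
    return transient_event_margin(lt_max, ct_max) > env_fluctuation_range
```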
THE FUTURE OF TOXICOLOGY-PREDICTIVE TOXICOLOGY: AN EXPANDED VIEW OF CHEMICAL TOXICITY
A chemistry approach to predictive toxicology relies on structure−activity relationship (SAR) modeling to predict biological activity from chemical structure. Such approaches have proven capabilities when applied to well-defined toxicity end points or regions of chemical space. T...
Warth, A
2015-11-01
Tumor diagnostics are based on histomorphology, immunohistochemistry and molecular pathological analysis of mutations, translocations and amplifications which are of diagnostic, prognostic and/or predictive value. In recent decades only histomorphology was used to classify lung cancer as either small (SCLC) or non-small cell lung cancer (NSCLC), although NSCLC was further subdivided in different entities; however, as no specific therapy options were available classification of specific subtypes was not clinically meaningful. This fundamentally changed with the discovery of specific molecular alterations in adenocarcinoma (ADC), e.g. mutations in KRAS, EGFR and BRAF or translocations of the ALK and ROS1 gene loci, which now form the basis of targeted therapies and have led to a significantly improved patient outcome. The diagnostic, prognostic and predictive value of imaging, morphological, immunohistochemical and molecular characteristics as well as their interaction were systematically assessed in a large cohort with available clinical data including patient survival. Specific and sensitive diagnostic markers and marker panels were defined and diagnostic test algorithms for predictive biomarker assessment were optimized. It was demonstrated that the semi-quantitative assessment of ADC growth patterns is a stage-independent predictor of survival and is reproducibly applicable in the routine setting. Specific histomorphological characteristics correlated with computed tomography (CT) imaging features and thus allowed an improved interdisciplinary classification, especially in the preoperative or palliative setting. Moreover, specific molecular characteristics, for example BRAF mutations and the proliferation index (Ki-67) were identified as clinically relevant prognosticators. Comprehensive clinical, morphological, immunohistochemical and molecular assessment of NSCLCs allow an optimized patient stratification. Respective algorithms now form the backbone of the 2015 lung cancer World Health Organization (WHO) classification.
Litzow, Michael A.; Piatt, John F.; Abookire, Alisa A.; Robards, Martin D.
2004-01-01
1. The quality-variability trade-off hypothesis predicts that (i) energy density (kJ g-1) and spatial-temporal variability in abundance are positively correlated in nearshore marine fishes; and (ii) prey selection by a nearshore piscivore, the pigeon guillemot (Cepphus columba Pallas), is negatively affected by variability in abundance. 2. We tested these predictions with data from a 4-year study that measured fish abundance with beach seines and pigeon guillemot prey utilization with visual identification of chick meals. 3. The first prediction was supported. Pearson's correlation showed that fishes with higher energy density were more variable on seasonal (r = 0.71) and annual (r = 0.66) time scales. Higher energy density fishes were also more abundant overall (r = 0.85) and more patchy at a scale of 10s of km (r = 0.77). 4. Prey utilization by pigeon guillemots was strongly non-random. Relative preference, defined as the difference between log-ratio transformed proportions of individual prey taxa in chick diets and beach seine catches, was significantly different from zero for seven of the eight main prey categories. 5. The second prediction was also supported. We used principal component analysis (PCA) to summarize variability in correlated prey characteristics (energy density, availability and variability in abundance). Two PCA scores explained 32% of observed variability in pigeon guillemot prey utilization. Seasonal variability in abundance was negatively weighted by these PCA scores, providing evidence of risk-averse selection. Prey availability, energy density and km-scale variability in abundance were positively weighted. 6. Trophic interactions are known to create variability in resource distribution in other systems. We propose that links between resource quality and the strength of trophic interactions may produce resource quality-variability trade-offs.
Heart rate turbulence predicts ICD-resistant mortality in ischaemic heart disease.
Marynissen, Thomas; Floré, Vincent; Heidbuchel, Hein; Nuyens, Dieter; Ector, Joris; Willems, Rik
2014-07-01
In high-risk patients, implantable cardioverter-defibrillators (ICDs) can convert the mode of death from arrhythmic to pump failure death. Therefore, we introduced the concept of 'ICD-resistant mortality' (IRM), defined as death (a) without previous appropriate ICD intervention (AI), (b) within 1 month after the first AI, or (c) within 1 year after the initial ICD implantation. Implantable cardioverter-defibrillator implantation in patients with a high risk of IRM should be avoided. Implantable cardioverter-defibrillator patients with ischaemic heart disease were included if a digitized 24 h Holter was available pre-implantation. Demographic, electrocardiographic, echocardiographic, and 24 h Holter risk factors were collected at device implantation. The primary endpoint was IRM. Cox regression analyses were used to test the association between predictors and outcome. We included 130 patients, with a mean left ventricular ejection fraction (LVEF) of 33.6 ± 10.3%. During a follow-up of 52 ± 31 months, 33 patients died. There were 21 cases of IRM. Heart rate turbulence (HRT) was the only Holter parameter associated with IRM and total mortality. A higher New York Heart Association (NYHA) class and a lower body mass index were the strongest predictors of IRM. Left ventricular ejection fraction predicted IRM on univariate analysis, and was the strongest predictor of total mortality. The only parameter that predicted AI was non-sustained ventricular tachycardia. Implantable cardioverter-defibrillator implantation based on NYHA class and LVEF leads to selection of patients with a higher risk of IRM and death. Heart rate turbulence may have added value for the identification of poor candidates for ICD therapy. Available Holter parameters seem limited in their ability to predict AI. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2013. For permissions please email: journals.permissions@oup.com.
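The association tests described above are standard survival analysis; a sketch with the lifelines package (the tooling and column names are assumptions, as the study's software is not named):

```python
from lifelines import CoxPHFitter

# df (illustrative): one row per ICD patient with columns
# 'followup_months', 'irm_event' (1 = ICD-resistant death), and candidate
# predictors such as 'hrt_abnormal', 'nyha_class', 'bmi', 'lvef'.
cph = CoxPHFitter()
# cph.fit(df, duration_col='followup_months', event_col='irm_event')
# cph.print_summary()  # hazard ratios for HRT, NYHA class, BMI, LVEF
```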
Ballante, Flavio; Marshall, Garland R
2016-01-25
Molecular docking is a widely used technique in drug design to predict the binding pose of a candidate compound in a defined therapeutic target. Numerous docking protocols are available, each characterized by different search methods and scoring functions, thus providing variable predictive capability on the same ligand-protein system. To validate a docking protocol, it is necessary to determine a priori the ability to reproduce the experimental binding pose (i.e., by determining the docking accuracy (DA)) in order to select the most appropriate docking procedure and thus estimate the rate of success in docking novel compounds. As common docking programs generally use different root-mean-square deviation (RMSD) formulas, scoring functions, and result formats, it is both difficult and time-consuming to consistently determine and compare their predictive capabilities in order to identify the best protocol to use for the target of interest and to extrapolate the binding poses (i.e., best-docked (BD), best-cluster (BC), and best-fit (BF) poses) when applying a given docking program over thousands/millions of molecules during virtual screening. To reduce this difficulty, two new procedures called Clusterizer and DockAccessor have been developed and implemented for use with some common and "free-for-academics" programs such as AutoDock4, AutoDock4(Zn), AutoDock Vina, DOCK, MpSDockZn, PLANTS, and Surflex-Dock to automatically extrapolate BD, BC, and BF poses as well as to perform consistent cluster and DA analyses. Clusterizer and DockAccessor (code available over the Internet) represent two novel tools to collect computationally determined poses and detect the most predictive docking approach. Herein, an application to human lysine deacetylase (hKDAC) inhibitors is illustrated.
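At the core of any DA computation is the pose RMSD against the crystallographic reference; a minimal sketch (ignoring symmetry-corrected atom matching, which the various programs handle differently):

```python
import numpy as np

def rmsd(coords_pred, coords_ref):
    """Heavy-atom RMSD between a docked pose and the reference pose;
    poses within a cutoff (commonly 2.0 Angstroms) of the crystal pose
    are counted as correctly docked when computing docking accuracy."""
    diff = np.asarray(coords_pred) - np.asarray(coords_ref)
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))
```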
Reliability analysis and initial requirements for FC systems and stacks
NASA Astrophysics Data System (ADS)
Åström, K.; Fontell, E.; Virtanen, S.
In the year 2000 Wärtsilä Corporation started an R&D program to develop SOFC systems for CHP applications. The program aims to bring to the market highly efficient, clean and cost competitive fuel cell systems with rated power output in the range of 50-250 kW for distributed generation and marine applications. In the program Wärtsilä focuses on system integration and development. System reliability and availability are key issues determining the competitiveness of the SOFC technology. In Wärtsilä, methods have been implemented for analysing the system in respect to reliability and safety as well as for defining reliability requirements for system components. A fault tree representation is used as the basis for reliability prediction analysis. A dynamic simulation technique has been developed to allow for non-static properties in the fault tree logic modelling. Special emphasis has been placed on reliability analysis of the fuel cell stacks in the system. A method for assessing reliability and critical failure predictability requirements for fuel cell stacks in a system consisting of several stacks has been developed. The method is based on a qualitative model of the stack configuration where each stack can be in a functional, partially failed or critically failed state, each of the states having different failure rates and effects on the system behaviour. The main purpose of the method is to understand the effect of stack reliability, critical failure predictability and operating strategy on the system reliability and availability. An example configuration, consisting of 5 × 5 stacks (series of 5 sets of 5 parallel stacks) is analysed in respect to stack reliability requirements as a function of predictability of critical failures and Weibull shape factor of failure rate distributions.
Cui, Xuefeng; Lu, Zhiwu; Wang, Sheng; Jing-Yan Wang, Jim; Gao, Xin
2016-06-15
Protein homology detection, a fundamental problem in computational biology, is an indispensable step toward predicting protein structures and understanding protein functions. Despite the advances in recent decades on sequence alignment, threading and alignment-free methods, protein homology detection remains a challenging open problem. Recently, network methods that try to find transitive paths in the protein structure space demonstrate the importance of incorporating network information of the structure space. Yet, current methods merge the sequence space and the structure space into a single space, and thus introduce inconsistency in combining different sources of information. We present a novel network-based protein homology detection method, CMsearch, based on cross-modal learning. Instead of exploring a single network built from the mixture of sequence and structure space information, CMsearch builds two separate networks to represent the sequence space and the structure space. It then learns sequence-structure correlation by simultaneously taking sequence information, structure information, sequence space information and structure space information into consideration. We tested CMsearch on two challenging tasks, protein homology detection and protein structure prediction, by querying all 8332 PDB40 proteins. Our results demonstrate that CMsearch is insensitive to the similarity metrics used to define the sequence and the structure spaces. By using HMM-HMM alignment as the sequence similarity metric, CMsearch clearly outperforms state-of-the-art homology detection methods and the CASP-winning template-based protein structure prediction methods. Our program is freely available for download from http://sfb.kaust.edu.sa/Pages/Software.aspx. Contact: xin.gao@kaust.edu.sa. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
D-MATRIX: A web tool for constructing weight matrix of conserved DNA motifs
Sen, Naresh; Mishra, Manoj; Khan, Feroz; Meena, Abha; Sharma, Ashok
2009-01-01
Despite considerable efforts to date, DNA motif prediction in whole genomes remains a challenge for researchers. Currently, genome-wide motif prediction tools require either a direct pattern sequence (for a single motif) or a weight matrix (for multiple motifs). Although there are known motif pattern databases and tools for genome-level prediction, there is no tool for weight matrix construction. Considering this, we developed the D-MATRIX tool, which constructs different types of weight matrices based on a user-defined set of aligned motif sequences and a motif width. For retrieval of known motif sequences, users can access the commonly used databases such as TFD, RegulonDB, DBTBS, Transfac. The D-MATRIX program uses a simple statistical approach for weight matrix construction, and the result can be converted into different file formats according to user requirements. It makes it possible to identify conserved motifs in co-regulated genes or a whole genome. As an example, we successfully constructed the weight matrix of the LexA transcription factor binding site with the help of known SOS-box cis-regulatory elements in the Deinococcus radiodurans genome. The algorithm is implemented in C# and wrapped in ASP.NET to maintain a user-friendly web interface. The D-MATRIX tool is accessible through the CIMAP domain network. Availability: http://203.190.147.116/dmatrix/ PMID:19759861
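The kind of weight matrix D-MATRIX automates can be illustrated with simple counting over aligned sites; a minimal sketch (the pseudocount and log-odds form are assumptions about the "simple statistical approach," and the example sites are hypothetical):

```python
import numpy as np

def weight_matrix(aligned_sites, pseudocount=1.0):
    """Build a log-odds position weight matrix from equal-length
    aligned motif sites, against a uniform background."""
    bases = 'ACGT'
    width = len(aligned_sites[0])
    counts = np.full((4, width), pseudocount)
    for site in aligned_sites:
        for j, b in enumerate(site.upper()):
            counts[bases.index(b), j] += 1
    freqs = counts / counts.sum(axis=0)
    return np.log2(freqs / 0.25)

# Example with two hypothetical SOS-box-like sites:
# w = weight_matrix(["CTGTATATATATACAG", "CTGTACATATATACAG"])
```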
Coupling Matched Molecular Pairs with Machine Learning for Virtual Compound Optimization.
Turk, Samo; Merget, Benjamin; Rippmann, Friedrich; Fulle, Simone
2017-12-26
Matched molecular pair (MMP) analyses are widely used in compound optimization projects to gain insights into structure-activity relationships (SAR). The analysis is traditionally done via statistical methods but can also be employed together with machine learning (ML) approaches to extrapolate to novel compounds. The MMP/ML method introduced here combines a fragment-based MMP implementation with different machine learning methods to obtain automated SAR decomposition and prediction. To test the prediction capabilities and model transferability, two different compound optimization scenarios were designed: (1) "new fragments", which occurs when exploring new fragments for a defined compound series, and (2) "new static core and transformations", which resembles, for instance, the identification of a new compound series. Very good results were achieved by all employed machine learning methods, especially for the "new fragments" case, but overall deep neural network models performed best, allowing reliable predictions also for the "new static core and transformations" scenario, where comprehensive SAR knowledge of the compound series is missing. Furthermore, we show that models trained on all available data have a higher generalizability compared to models trained on focused series and can extend beyond the chemical space covered in the training data. Thus, coupling MMP with deep neural networks provides a promising approach to make high-quality predictions on various data sets and in different compound optimization scenarios.
Predicted seafloor facies of Central Santa Monica Bay, California
Dartnell, Peter; Gardner, James V.
2004-01-01
Summary -- Mapping surficial seafloor facies (sand, silt, muddy sand, rock, etc.) should be the first step in marine geological studies and is crucial for modeling sediment processes and pollution transport, deciphering tectonics, and defining benthic habitats. This report outlines an empirical technique that predicts the distribution of seafloor facies for a large area offshore of Los Angeles, CA using high-resolution bathymetry and co-registered, calibrated backscatter from multibeam echosounders (MBES) correlated to ground-truth sediment samples. The technique uses a series of procedures that involve supervised classification and a hierarchical decision-tree classification that are now available in advanced image-analysis software packages. Derivative variance images of both bathymetry and acoustic backscatter are calculated from the MBES data and then used in a hierarchical decision-tree framework to classify the MBES data into areas of rock, gravelly muddy sand, muddy sand, and mud. A quantitative accuracy assessment of the classification results is performed using ground-truth sediment samples. The predicted facies map is also ground-truthed using seafloor photographs and high-resolution sub-bottom seismic-reflection profiles. This Open-File Report contains the predicted seafloor facies map as a georeferenced TIFF image along with the multibeam bathymetry and acoustic backscatter data used in the study, as well as an explanation of the empirical classification process.
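A sketch of the hierarchical decision-tree step using scikit-learn in place of the image-analysis software (the feature layout and class coding are illustrative):

```python
from sklearn.tree import DecisionTreeClassifier

# Per-pixel features: [bathymetry, backscatter, bathymetry_variance,
# backscatter_variance]; labels from ground-truth sediment samples:
# 0 = rock, 1 = gravelly muddy sand, 2 = muddy sand, 3 = mud.
clf = DecisionTreeClassifier(max_depth=4)
# clf.fit(pixel_features_at_sample_sites, sample_labels)
# facies_map = clf.predict(all_pixel_features).reshape(grid_shape)
```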
Venables, Noah C; Sellbom, Martin; Sourander, Andre; Kendler, Kenneth S; Joiner, Thomas E; Drislane, Laura E; Sillanmäki, Lauri; Elonheimo, Henrik; Parkkola, Kai; Multimaki, Petteri; Patrick, Christopher J
2015-04-30
Biobehavioral dispositions can serve as valuable referents for biologically oriented research on core processes with relevance to many psychiatric conditions. The present study examined two such dispositional variables, weak response inhibition (disinhibition; INH-) and threat sensitivity (fearfulness; THT+), as predictors of the serious transdiagnostic problem of suicide risk in two samples: male and female outpatients from a U.S. clinic (N=1078), and a population-based male military cohort from Finland (N=3855). INH- and THT+ were operationalized through scores on scale measures of disinhibition and fear/fearlessness, known to be related to DSM-defined clinical conditions and brain biomarkers. Suicide risk was assessed by clinician ratings (clinic sample) and questionnaires (both samples). Across samples and alternative suicide indices, INH- and THT+ each contributed uniquely to the prediction of suicide risk, beyond internalizing and externalizing problems in the clinic sample, where diagnostic data were available. Further, in both samples, INH- and THT+ interactively predicted suicide risk, with individuals scoring concurrently high on both dispositions exhibiting markedly augmented risk. Findings demonstrate that the dispositional constructs of INH- and THT+ are predictive of suicide risk and hold potential as referents for biological research on suicidal behavior. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
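The interactive-prediction finding corresponds to a product term in a regression model; a sketch with statsmodels (the tooling and column names are assumptions):

```python
import statsmodels.formula.api as smf

# df columns (illustrative): 'suicide_risk' (rating or questionnaire score),
# 'inh' (disinhibition score), 'tht' (threat sensitivity score).
# model = smf.ols('suicide_risk ~ inh * tht', data=df).fit()
# print(model.summary())  # the 'inh:tht' term tests the INH- x THT+ interaction
```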
Likelihood of achieving air quality targets under model uncertainties.
Digar, Antara; Cohan, Daniel S; Cox, Dennis D; Kim, Byeong-Uk; Boylan, James W
2011-01-01
Regulatory attainment demonstrations in the United States typically apply a bright-line test to predict whether a control strategy is sufficient to attain an air quality standard. Photochemical models are the best tools available to project future pollutant levels and are a critical part of regulatory attainment demonstrations. However, because photochemical models are uncertain and future meteorology is unknowable, future pollutant levels cannot be predicted perfectly and attainment cannot be guaranteed. This paper introduces a computationally efficient methodology for estimating the likelihood that an emission control strategy will achieve an air quality objective in light of uncertainties in photochemical model input parameters (e.g., uncertain emission and reaction rates, deposition velocities, and boundary conditions). The method incorporates Monte Carlo simulations of a reduced form model representing pollutant-precursor response under parametric uncertainty to probabilistically predict the improvement in air quality due to emission control. The method is applied to recent 8-h ozone attainment modeling for Atlanta, Georgia, to assess the likelihood that additional controls would achieve fixed (well-defined) or flexible (due to meteorological variability and uncertain emission trends) targets of air pollution reduction. The results show that in certain instances ranking of the predicted effectiveness of control strategies may differ between probabilistic and deterministic analyses.
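The probabilistic attainment test can be illustrated with a toy reduced-form Monte Carlo: sample an uncertain response parameter, propagate it through a simple pollutant-precursor response, and report the fraction of draws meeting the target. All distributions and numbers below are assumptions for illustration only.

```python
import numpy as np

def attainment_likelihood(base_conc, sensitivity, target,
                          n_draws=10000, sigma=0.3):
    """Sample a multiplicative uncertainty factor on the control response,
    apply a linear pollutant-precursor response, and return the fraction
    of Monte Carlo draws in which the projected concentration attains
    the target."""
    rng = np.random.default_rng(0)
    factor = rng.lognormal(mean=0.0, sigma=sigma, size=n_draws)
    projected = base_conc - sensitivity * factor  # ppb after controls
    return float(np.mean(projected <= target))

# e.g. attainment_likelihood(base_conc=88.0, sensitivity=6.0, target=84.0)
```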
Thorne, Lesley H; Johnston, David W; Urban, Dean L; Tyne, Julian; Bejder, Lars; Baird, Robin W; Yin, Suzanne; Rickards, Susan H; Deakos, Mark H; Mobley, Joseph R; Pack, Adam A; Chapla Hill, Marie
2012-01-01
Predictive habitat models can provide critical information that is necessary in many conservation applications. Using Maximum Entropy modeling, we characterized habitat relationships and generated spatial predictions of spinner dolphin (Stenella longirostris) resting habitat in the main Hawaiian Islands. Spinner dolphins in Hawai'i exhibit predictable daily movements, using inshore bays as resting habitat during daylight hours and foraging in offshore waters at night. There are growing concerns regarding the effects of human activities on spinner dolphins resting in coastal areas. However, the environmental factors that define suitable resting habitat remain unclear and must be assessed and quantified in order to properly address interactions between humans and spinner dolphins. We used a series of dolphin sightings from recent surveys in the main Hawaiian Islands and a suite of environmental variables hypothesized as being important to resting habitat to model spinner dolphin resting habitat. The model performed well in predicting resting habitat and indicated that proximity to deep water foraging areas, depth, the proportion of bays with shallow depths, and rugosity were important predictors of spinner dolphin habitat. Predicted locations of suitable spinner dolphin resting habitat provided in this study indicate areas where future survey efforts should be focused and highlight potential areas of conflict with human activities. This study provides an example of a presence-only habitat model used to inform the management of a species for which patterns of habitat availability are poorly understood.
Inability to predict postpartum hemorrhage: insights from Egyptian intervention data
2011-01-01
Background Knowledge of how well we can predict primary postpartum hemorrhage (PPH) can help policy makers and health providers design delivery protocols and PPH case management. The purpose of this paper is to identify risk factors and determine the predictive probabilities of those risk factors for primary PPH among women expecting singleton vaginal deliveries in Egypt. Methods In a prospective cohort study, 2510 pregnant women were recruited over a six-month period in Egypt in 2004. PPH was defined as blood loss ≥ 500 ml. Measures of blood loss were made every 20 minutes for the first 4 hours after delivery using a calibrated under-the-buttocks drape. Using all variables available in the patients' charts, we divided them into ante-partum and intra-partum factors. We employed logistic regression to analyze socio-demographic factors, medical and past obstetric history, and labor and delivery outcomes as potential PPH risk factors. Post-model predicted probabilities were estimated using the identified risk factors. Results We found a total of 93 cases of primary PPH. In multivariate models, ante-partum hemoglobin, history of previous PPH, labor augmentation and prolonged labor were significantly associated with PPH. Post-model probability estimates showed that even among women with three or more risk factors, PPH could be predicted in only 10% of the cases. Conclusions The predictive probability of ante-partum and intra-partum risk factors for PPH is very low. Prevention of PPH should therefore be offered to all women. PMID:22123123
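A minimal sketch of the post-model probability estimation, assuming a logistic model on four binary risk-factor indicators and simulated data sized to mimic the cohort (none of the coefficients or prevalences are from the paper):

```python
# Fit a logistic model on four reported risk factors and tabulate predicted
# PPH probability by the number of risk factors present. Data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 2510
# low hemoglobin, prior PPH, labor augmentation, prolonged labor (invented rates)
X = rng.binomial(1, [0.3, 0.05, 0.2, 0.1], size=(n, 4))
logit = -3.8 + X @ np.array([0.5, 0.9, 0.6, 0.7])
pph = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

fit = sm.Logit(pph, sm.add_constant(X.astype(float))).fit(disp=0)
p_hat = fit.predict(sm.add_constant(X.astype(float)))
for k in range(5):
    mask = X.sum(axis=1) == k
    if mask.any():
        print(f"{k} risk factors: mean predicted P(PPH) = {p_hat[mask].mean():.3f}")
```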
Latent feature decompositions for integrative analysis of multi-platform genomic data
Gregory, Karl B.; Momin, Amin A.; Coombes, Kevin R.; Baladandayuthapani, Veerabhadran
2015-01-01
Increased availability of multi-platform genomics data on matched samples has sparked research efforts to discover how diverse molecular features interact both within and between platforms. In addition, simultaneous measurements of genetic and epigenetic characteristics illuminate the roles their complex relationships play in disease progression and outcomes. However, integrative methods for diverse genomics data are faced with the challenges of ultra-high dimensionality and the existence of complex interactions both within and between platforms. We propose a novel modeling framework for integrative analysis based on decompositions of the large number of platform-specific features into a smaller number of latent features. Subsequently, we build a predictive model for clinical outcomes accounting for both within- and between-platform interactions based on Bayesian model averaging procedures. Principal components, partial least squares and non-negative matrix factorization, as well as sparse counterparts of each, are used to define the latent features, and the performance of these decompositions is compared both on real and simulated data. The latent feature interactions are shown to preserve interactions between the original features and not only aid prediction but also allow explicit selection of outcome-related features. The methods are motivated by, and applied to, a glioblastoma multiforme dataset from The Cancer Genome Atlas to predict patient survival times integrating gene expression, microRNA, copy number and methylation data. For the glioblastoma data, we find a high concordance between our selected prognostic genes and genes with known associations with glioblastoma. In addition, our model discovers several relevant cross-platform interactions such as copy number variation associated gene dosing and epigenetic regulation through promoter methylation. On simulated data, we show that our proposed method successfully incorporates interactions within and between genomic platforms to aid accurate prediction and variable selection. Our methods perform best when principal components are used to define the latent features. PMID:26146492
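The latent-feature idea can be sketched in a few lines: reduce each platform block to principal components and regress the outcome on the components plus their between-platform products. The blocks, outcome, and dimensions below are placeholders (the paper's full framework additionally uses PLS/NMF variants and Bayesian model averaging):

```python
# PCA latent features per platform, plus between-platform interaction terms.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 200
expr = rng.standard_normal((n, 500))   # gene expression platform (placeholder)
meth = rng.standard_normal((n, 300))   # methylation platform (placeholder)

z1 = PCA(n_components=3).fit_transform(expr)
z2 = PCA(n_components=3).fit_transform(meth)

# Within-platform latent features plus all between-platform products
inter = np.einsum("ni,nj->nij", z1, z2).reshape(n, -1)
Z = np.hstack([z1, z2, inter])

y = rng.standard_normal(n)             # e.g., (transformed) survival time
print(LinearRegression().fit(Z, y).score(Z, y))
```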
2013-10-01
study will recruit wounded warriors with severe extremity trauma, which places them at high risk for heterotopic ossification (HO); bone formation at … involved in HO; 2) to define accurate and practical methods to predict where HO will develop; and 3) to define potential therapies for prevention or … elicit HO. These tools also need to provide effective methods for early diagnosis or risk assessment (prediction) so that therapies for prevention or …
Toward automated biochemotype annotation for large compound libraries.
Chen, Xian; Liang, Yizeng; Xu, Jun
2006-08-01
Combinatorial chemistry allows scientists to probe a large, synthetically accessible chemical space. However, identifying the sub-space that is selectively associated with a biological target of interest is crucial to drug discovery and the life sciences. This paper describes a process to automatically annotate biochemotypes of compounds in a library and thus to identify bioactivity-related chemotypes (biochemotypes) from a large library of compounds. The process consists of two steps: (1) predicting all possible bioactivities for each compound in a library, and (2) deriving possible biochemotypes based on these predictions. The Prediction of Activity Spectra for Substances (PASS) program was used in the first step. In the second step, structural similarity and scaffold-hopping technologies are employed to derive biochemotypes from the bioactivity predictions and the corresponding annotated biochemotypes from the MDL Drug Data Report (MDDR) database. A commercially available compound library (CACL) of nearly one million compounds (982,889) has been tested using this process. This paper demonstrates the feasibility of automatically annotating biochemotypes for large libraries of compounds. Nevertheless, some issues need to be considered in order to improve the process. First, the prediction accuracy of the PASS program has no significant correlation with the number of compounds in a training set. Larger training sets do not necessarily increase the maximal error of prediction (MEP), nor do they increase hit structural diversity; smaller training sets do not necessarily decrease the MEP, nor do they decrease hit structural diversity. Second, the success of systematic bioactivity prediction relies on modeling, training data, and the definition of bioactivities (the biochemotype ontology). Unfortunately, the biochemotype ontology was not well developed in the PASS program; consequently, "ill-defined" bioactivities can reduce the quality of predictions. This paper suggests ways in which systematic bioactivity prediction programs should be improved.
Ryan, J E; Warrier, S K; Lynch, A C; Ramsay, R G; Phillips, W A; Heriot, A G
2016-03-01
Approximately 20% of patients treated with neoadjuvant chemoradiotherapy (nCRT) for locally advanced rectal cancer achieve a pathological complete response (pCR) while the remainder derive the benefit of improved local control and downstaging and a small proportion show a minimal response. The ability to predict which patients will benefit would allow for improved patient stratification directing therapy to those who are likely to achieve a good response, thereby avoiding ineffective treatment in those unlikely to benefit. A systematic review of the English language literature was conducted to identify pathological factors, imaging modalities and molecular factors that predict pCR following chemoradiotherapy. PubMed, MEDLINE and Cochrane Database searches were conducted with the following keywords and MeSH search terms: 'rectal neoplasm', 'response', 'neoadjuvant', 'preoperative chemoradiation', 'tumor response'. After review of title and abstracts, 85 articles addressing the prediction of pCR were selected. Clear methods to predict pCR before chemoradiotherapy have not been defined. Clinical and radiological features of the primary cancer have limited ability to predict response. Molecular profiling holds the greatest potential to predict pCR but adoption of this technology will require greater concordance between cohorts for the biomarkers currently under investigation. At present no robust markers of the prediction of pCR have been identified and the topic remains an area for future research. This review critically evaluates existing literature providing an overview of the methods currently available to predict pCR to nCRT for locally advanced rectal cancer. The review also provides a comprehensive comparison of the accuracy of each modality. Colorectal Disease © 2015 The Association of Coloproctology of Great Britain and Ireland.
Chandra, Subhash; Murali, Arvind; Bansal, Reena; Agarwal, Dipti; Holm, Adrian
2017-01-01
Background: Predicting the development of severe disease has remained a major challenge in the management of acute pancreatitis. The Bedside Index for Severity in Acute Pancreatitis (BISAP) is easy to calculate from the data available in the first 24 hours. Here, we performed a systematic review to determine the prognostic accuracy of the BISAP for severe acute pancreatitis (SAP). Methods: Major databases of biomedical publications were searched during the first week of October 2015. Two independent reviewers searched records in two phases. Studies that reported prognostic accuracy of the BISAP for SAP from prospective cohorts were included. The pooled area under the receiver operating curve (AUC) was calculated. Results: Twelve studies were included for data synthesis, and methodological quality assessment was performed for 10. All the studies had enrolled consecutive patients, had a broad spectrum of disease severity, reported explicit interpretation of the predictor, had a well-defined outcome of interest and had adequate follow-up. Blinded outcome assessment was reported in only one study. The pooled AUC was 0.85 (95% CI 0.80–0.90). There was significant heterogeneity, I² = 86.6%. Studies using the revised Atlanta classification to define SAP had a pooled AUC of 0.92 (95% CI 0.90–0.95), but heterogeneity persisted, I² = 67%. Subgroup analysis based on the rate of SAP (>20% vs <20%) did not eliminate the heterogeneity. Conclusion: The BISAP has very good predictive performance for SAP across different patient populations and etiologies. Studies evaluating the impact of incorporating the BISAP into clinical practice to improve outcomes in acute pancreatitis are needed before its adoption can be advocated with confidence. PMID:29046745
Zhang, Wangshu; Coba, Marcelo P; Sun, Fengzhu
2016-01-11
Protein domains can be viewed as portable units of biological function that define the functional properties of proteins. Therefore, if a protein is associated with a disease, protein domains might also be associated and define disease endophenotypes. However, knowledge about such domain-disease relationships is rarely available. Thus, identification of domains associated with human diseases would greatly improve our understanding of the mechanisms of human complex diseases and further improve the prevention, diagnosis and treatment of these diseases. Based on phenotypic similarities among diseases, we first group diseases into overlapping modules. We then develop a framework to infer associations between domains and diseases through known relationships between diseases and modules, domains and proteins, as well as proteins and disease modules. Different methods, including Association, Maximum likelihood estimation (MLE), Domain-disease pair exclusion analysis (DPEA), Bayesian, and Parsimonious explanation (PE) approaches, are developed to predict domain-disease associations. We demonstrate the effectiveness of all five approaches via a series of validation experiments, and show the robustness of the MLE, Bayesian and PE approaches to the involved parameters. We also study the effects of disease modularization in inferring novel domain-disease associations. Through validation, the AUC (Area Under the receiver operating characteristic Curve) scores for the Bayesian, MLE, DPEA, PE, and Association approaches are 0.86, 0.84, 0.83, 0.83 and 0.79, respectively, indicating the usefulness of these approaches for predicting domain-disease relationships. Finally, we choose the Bayesian approach to infer domains associated with two common diseases, Crohn's disease and type 2 diabetes. The Bayesian approach has the best performance for the inference of domain-disease relationships. The predicted landscape between domains and diseases provides a more detailed view of disease mechanisms.
T-Epitope Designer: A HLA-peptide binding prediction server.
Kangueane, Pandjassarame; Sakharkar, Meena Kishore
2005-05-15
The current challenge in synthetic vaccine design is the development of a methodology to identify and test short antigen peptides as potential T-cell epitopes. Recently, we described a HLA-peptide binding model (using structural properties) capable of predicting peptides binding to any HLA allele. Consequently, we have developed a web server named T-EPITOPE DESIGNER to facilitate HLA-peptide binding prediction. The prediction server is based on a model that defines peptide binding pockets using information gleaned from X-ray crystal structures of HLA-peptide complexes, followed by the estimation of peptide binding to binding pockets. Thus, the prediction server enables the calculation of peptide binding to HLA alleles. This model is superior to many existing methods because of its potential application to any given HLA allele whose sequence is clearly defined. The web server finds potential application in T cell epitope vaccine design. http://www.bioinformation.net/ted/
Izarzugaza, Jose MG; Juan, David; Pons, Carles; Pazos, Florencio; Valencia, Alfonso
2008-01-01
Background It has repeatedly been shown that interacting protein families tend to have similar phylogenetic trees. These similarities can be used to predict the mapping between two families of interacting proteins (i.e. which proteins from one family interact with which members of the other). The correct mapping will be that which maximizes the similarity between the trees. The two families may comprise both orthologs and paralogs if members of the two families are present in more than one organism. This fact can be exploited to restrict the possible mappings, simply by disallowing links between proteins of different organisms. We present here an algorithm to predict the mapping between families of interacting proteins which is able to incorporate information regarding orthologues, or any other assignment of proteins to "classes" that may restrict possible mappings. Results For the first time in methods for predicting mappings, we have tested this new approach on a large number of interacting protein domains in order to statistically assess its performance. The method accurately predicts around 80% in the most favourable cases. We also analysed in detail the results of the method for a well-defined case of interacting families, the sensor and kinase components of the Ntr-type two-component system, for which up to 98% of the pairings predicted by the method were correct. Conclusion Based on the well-established relationship between tree similarity and interactions, we developed a method for predicting the mapping between two interacting families using genomic information alone. The program is available through a web interface. PMID:18215279
Which ante mortem clinical features predict progressive supranuclear palsy pathology?
Respondek, Gesine; Kurz, Carolin; Arzberger, Thomas; Compta, Yaroslau; Englund, Elisabet; Ferguson, Leslie W; Gelpi, Ellen; Giese, Armin; Irwin, David J; Meissner, Wassilios G; Nilsson, Christer; Pantelyat, Alexander; Rajput, Alex; van Swieten, John C; Troakes, Claire; Josephs, Keith A; Lang, Anthony E; Mollenhauer, Brit; Müller, Ulrich; Whitwell, Jennifer L; Antonini, Angelo; Bhatia, Kailash P; Bordelon, Yvette; Corvol, Jean-Christophe; Colosimo, Carlo; Dodel, Richard; Grossman, Murray; Kassubek, Jan; Krismer, Florian; Levin, Johannes; Lorenzl, Stefan; Morris, Huw; Nestor, Peter; Oertel, Wolfgang H; Rabinovici, Gil D; Rowe, James B; van Eimeren, Thilo; Wenning, Gregor K; Boxer, Adam; Golbe, Lawrence I; Litvan, Irene; Stamelou, Maria; Höglinger, Günter U
2017-07-01
Progressive supranuclear palsy (PSP) is a neuropathologically defined disease presenting with a broad spectrum of clinical phenotypes. Our objective was to identify clinical features and investigations that predict or exclude PSP pathology during life, aiming at an optimization of the clinical diagnostic criteria for PSP. We performed a systematic review of the literature published since 1996 to identify clinical features and investigations that may predict or exclude PSP pathology. We then extracted standardized data from clinical charts of patients with pathologically diagnosed PSP and relevant disease controls and calculated the sensitivity, specificity, and positive predictive value of key clinical features for PSP in this cohort. Of 4166 articles identified by the database inquiry, 269 met predefined standards. The literature review identified clinical features predictive of PSP, including features of the following 4 functional domains: ocular motor dysfunction, postural instability, akinesia, and cognitive dysfunction. No biomarker or genetic feature was found reliably validated to predict definite PSP. High-quality original natural history data were available from 206 patients with pathologically diagnosed PSP and from 231 pathologically diagnosed disease controls (54 corticobasal degeneration, 51 multiple system atrophy with predominant parkinsonism, 53 Parkinson's disease, 73 behavioral variant frontotemporal dementia). We identified clinical features that predicted PSP pathology, including phenotypes other than Richardson's syndrome, with varying sensitivity and specificity. Our results highlight the clinical variability of PSP and the high prevalence of phenotypes other than Richardson's syndrome. The features of variant phenotypes with high specificity and sensitivity should serve to optimize clinical diagnosis of PSP. © 2017 International Parkinson and Movement Disorder Society.
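The chart-review metrics reduce to simple 2x2-table arithmetic; the helper below computes sensitivity, specificity, and positive predictive value for one hypothetical clinical feature (the counts are invented, though the group sizes echo the 206 PSP cases and 231 controls):

```python
# Diagnostic metrics from a 2x2 cross-tabulation of a clinical feature
# against pathological PSP diagnosis.
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity, and positive predictive value."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
    }

# e.g., a feature present in 120/206 PSP cases and 20/231 controls (invented)
print(diagnostic_metrics(tp=120, fp=20, fn=86, tn=211))
```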
Jang, Jin-Young; Park, Taesung; Lee, Selyeong; Kim, Yongkang; Lee, Seung Yeoun; Kim, Sun-Whe; Kim, Song-Cheol; Song, Ki-Byung; Yamamoto, Masakazu; Hatori, Takashi; Hirono, Seiko; Satoi, Sohei; Fujii, Tsutomu; Hirano, Satoshi; Hashimoto, Yasushi; Shimizu, Yashuhiro; Choi, Dong Wook; Choi, Seong Ho; Heo, Jin Seok; Motoi, Fuyuhiko; Matsumoto, Ippei; Lee, Woo Jung; Kang, Chang Moo; Han, Ho-Seong; Yoon, Yoo-Seok; Sho, Masayuki; Nagano, Hiroaki; Honda, Goro; Kim, Sang Geol; Yu, Hee Chul; Chung, Jun Chul; Nagakawa, Yuichi; Seo, Hyung Il; Yamaue, Hiroki
2017-12-01
This study evaluated individual risks of malignancy and proposed a nomogram for predicting malignancy of branch duct type intraductal papillary mucinous neoplasms (BD-IPMNs) using a large IPMN database. Although consensus guidelines list several malignancy-predicting factors in patients with BD-IPMN, those variables have different predictability, and individual quantitative prediction of malignancy risk is limited. Clinicopathological factors predictive of malignancy were retrospectively analyzed in 2525 patients with biopsy-proven BD-IPMN at 22 tertiary hospitals in Korea and Japan. Patients with main duct dilatation >10 mm and those with inaccurate information were excluded, leaving a study cohort of 2258 patients. Malignant IPMNs were defined as those with high grade dysplasia or associated invasive carcinoma. Of 2258 patients, 986 (43.7%) had low grade dysplasia, 443 (19.6%) had intermediate grade dysplasia, 398 (17.6%) had high grade dysplasia, and 431 (19.1%) had invasive carcinoma. To construct and validate the nomogram, patients were randomly allocated into training and validation sets, with fixed ratios of benign and malignant lesions. Multiple logistic regression analysis resulted in five variables (cyst size, duct dilatation, mural nodule, serum CA19-9, and CEA) being selected to construct the nomogram. In the validation set, this nomogram showed excellent discrimination power in a calibration test bootstrapped 1000 times. A nomogram predicting malignancy in patients with BD-IPMN was constructed using a logistic regression model. This nomogram may be useful in identifying patients at risk of malignancy and for selecting optimal treatment methods. The nomogram is freely available at http://statgen.snu.ac.kr/software/nomogramIPMN.
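A bare-bones version of the nomogram construction, assuming a five-variable logistic model and a bootstrapped validation AUC on simulated data (the variable distributions are invented; the malignancy rate of 36.7% is the sum of the reported high grade dysplasia and invasive carcinoma proportions):

```python
# Five-variable logistic model with a 1000-times bootstrapped validation AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 2258
X = np.column_stack([
    rng.gamma(3.0, 8.0, n),        # cyst size (mm)
    rng.uniform(1, 10, n),         # main duct diameter (mm)
    rng.binomial(1, 0.25, n),      # mural nodule present
    rng.lognormal(3.0, 1.0, n),    # serum CA19-9
    rng.lognormal(1.0, 0.5, n),    # serum CEA
])
y = rng.binomial(1, 0.367, n)      # malignant vs benign (placeholder labels)

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

aucs = []
for _ in range(1000):
    idx = rng.integers(0, len(y_va), len(y_va))
    if len(set(y_va[idx])) == 2:   # a resample needs both classes for an AUC
        aucs.append(roc_auc_score(y_va[idx], model.predict_proba(X_va[idx])[:, 1]))
print(f"AUC = {np.mean(aucs):.3f} (bootstrap SD {np.std(aucs):.3f})")
```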
ERIC Educational Resources Information Center
Hilton, N. Zoe; Harris, Grant T.
2009-01-01
Prediction effect sizes such as ROC area are important for demonstrating a risk assessment's generalizability and utility. How a study defines recidivism might affect predictive accuracy. Nonrecidivism is problematic when predicting specialized violence (e.g., domestic violence). The present study cross-validates the ability of the Ontario…
A test to evaluate the earthquake prediction algorithm, M8
Healy, John H.; Kossobokov, Vladimir G.; Dewey, James W.
1992-01-01
A test of the algorithm M8 is described. The test is constructed to meet four rules, which we propose to be applicable to the test of any method for earthquake prediction: 1. An earthquake prediction technique should be presented as a well documented, logical algorithm that can be used by investigators without restrictions. 2. The algorithm should be coded in a common programming language and implementable on widely available computer systems. 3. A test of the earthquake prediction technique should involve future predictions with a black box version of the algorithm in which potentially adjustable parameters are fixed in advance. The source of the input data must be defined and ambiguities in these data must be resolved automatically by the algorithm. 4. At least one reasonable null hypothesis should be stated in advance of testing the earthquake prediction method, and it should be stated how this null hypothesis will be used to estimate the statistical significance of the earthquake predictions. The M8 algorithm has successfully predicted several destructive earthquakes, in the sense that the earthquakes occurred inside regions with linear dimensions from 384 to 854 km that the algorithm had identified as being in times of increased probability for strong earthquakes. In addition, M8 has successfully "post predicted" high percentages of strong earthquakes in regions to which it has been applied in retroactive studies. The statistical significance of previous predictions has not been established, however, and post-prediction studies in general are notoriously subject to success-enhancement through hindsight. Nor has it been determined how much more precise an M8 prediction might be than forecasts and probability-of-occurrence estimates made by other techniques. We view our test of M8 both as a means to better determine the effectiveness of M8 and as an experimental structure within which to make observations that might lead to improvements in the algorithm or conceivably lead to a radically different approach to earthquake prediction.
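Rule 4 can be made concrete with a one-sided binomial test: given a null success probability p0 for alarms (for example, the fraction of space-time covered by alarm regions), the significance of the observed hit count follows directly. The counts and p0 below are illustrative, not M8 results:

```python
# Significance of a prediction record against a chance-success null.
from scipy.stats import binomtest

n_predictions = 20      # alarms issued by the black-box algorithm (invented)
n_hits = 9              # strong earthquakes inside declared alarm regions
p0 = 0.15               # chance success rate under the null hypothesis

result = binomtest(n_hits, n_predictions, p0, alternative="greater")
print(f"P(>= {n_hits} hits by chance) = {result.pvalue:.4f}")
```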
NASA Astrophysics Data System (ADS)
Caldwell, R. L.; Edmonds, D. A.; Baumgardner, S. E.; Paola, C.; Roy, S.; Nienhuis, J.
2017-12-01
River deltas are irreplaceable natural and societal resources, though they are at risk of drowning due to sea-level rise and decreased sediment delivery. To enhance hazard mitigation efforts in the face of global environmental change, we must understand the controls on delta growth. Previous empirical studies of delta growth are based on small datasets and often biased towards large, river-dominated deltas. We are currently lacking relationships that predict delta formation, area, or topset slope across the full breadth of global deltas. To this end, we developed a global dataset of 5,229 rivers (with and without deltas) paired with nine upstream (e.g., sediment discharge) and four downstream (e.g., wave height) environmental variables. Using Google Earth imagery, we identify all coastal river mouths (≥ 50 m wide) connected to an upstream catchment, and define deltas as river mouths that split into two or more distributary channels, end in a depositional protrusion from the shoreline, or do both. Delta area is defined as the area of the polygon connecting the delta node, two lateral shoreline extent points, and the basinward-most extent of the delta. Topset slope is calculated as the average, linear slope from the delta node elevation (extracted from SRTM data) to the main channel mouth, and shoreline and basinward extent points. Of the 5,229 rivers in our dataset, 1,816 (35%) have a delta. Using 495 rivers (those with data available for all variables), we build an empirically-derived relationship that predicts delta formation with 76% success. Delta formation is controlled predominantly by upstream water and sediment discharge, with secondary control by downstream waves and tides that suppress delta formation. For those rivers that do form deltas, we show that delta area is best predicted by sediment discharge, bathymetric slope, and drainage basin area (R2 = 0.95, n = 170), and exhibits a negative power-law relationship with topset slope (R2 = 0.85, n = 1,342). Topset slope is best predicted by grain size and wave height (R2 = 0.50, n = 358). These empirical relationships can aid in forecasting delta response to continued global environmental change.
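The negative power-law between delta area and topset slope is the kind of relationship recovered by a linear fit in log-log space; the sketch below does this on synthetic data (the exponent, prefactor, and noise level are invented, not the study's fit):

```python
# Fit area ~ a * slope^b by linear regression on log-transformed variables.
import numpy as np

rng = np.random.default_rng(11)
slope = 10 ** rng.uniform(-5, -2, 300)                        # topset slope (m/m)
area = 1e-3 * slope ** -1.5 * 10 ** rng.normal(0, 0.2, 300)   # km^2, synthetic

b, log_a = np.polyfit(np.log10(slope), np.log10(area), 1)
pred = log_a + b * np.log10(slope)
r2 = 1 - np.sum((np.log10(area) - pred) ** 2) / np.sum(
    (np.log10(area) - np.log10(area).mean()) ** 2)
print(f"area ≈ {10**log_a:.3g} * slope^{b:.2f}, R² = {r2:.2f}")
```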
LigandRNA: computational predictor of RNA–ligand interactions
Philips, Anna; Milanowska, Kaja; Łach, Grzegorz; Bujnicki, Janusz M.
2013-01-01
RNA molecules have recently become attractive as potential drug targets due to the increased awareness of their importance in key biological processes. The increase of the number of experimentally determined RNA 3D structures enabled structure-based searches for small molecules that can specifically bind to defined sites in RNA molecules, thereby blocking or otherwise modulating their function. However, as of yet, computational methods for structure-based docking of small molecule ligands to RNA molecules are not as well established as analogous methods for protein-ligand docking. This motivated us to create LigandRNA, a scoring function for the prediction of RNA–small molecule interactions. Our method employs a grid-based algorithm and a knowledge-based potential derived from ligand-binding sites in the experimentally solved RNA–ligand complexes. As an input, LigandRNA takes an RNA receptor file and a file with ligand poses. As an output, it returns a ranking of the poses according to their score. The predictive power of LigandRNA favorably compares to five other publicly available methods. We found that the combination of LigandRNA and Dock6 into a “meta-predictor” leads to further improvement in the identification of near-native ligand poses. The LigandRNA program is available free of charge as a web server at http://ligandrna.genesilico.pl. PMID:24145824
A finite deformation viscoelastic-viscoplastic constitutive model for self-healing materials
NASA Astrophysics Data System (ADS)
Shahsavari, H.; Naghdabadi, R.; Baghani, M.; Sohrabpour, S.
2016-12-01
In this paper, employing the Hencky strain, the viscoelastic-viscoplastic response of self-healing materials is investigated. Considering irreversible thermodynamics and using the effective configuration in Continuum Damage-Healing Mechanics (CDHM), a phenomenological finite strain viscoelastic-viscoplastic constitutive model is presented. Considering finite viscoelastic and viscoplastic deformations, the total deformation gradient is multiplicatively decomposed into viscoelastic and viscoplastic parts. Due to the mathematical advantages and physical meaning of the Hencky strain, this measure of strain is employed in the constitutive model development. In this regard, defining the damage and healing variables and employing the strain equivalence hypothesis, the strain tensor is determined in the effective configuration. Satisfying the Clausius-Duhem inequality, evolution equations are introduced for the viscoelastic and viscoplastic strains. The damage and healing variables also evolve according to two different prescribed functions. To employ the proposed model under different loading conditions, the model is discretized in semi-implicit form. Material parameters of the model are identified from experimental tests on asphalt mixes available in the literature. Finally, the capability of the model is demonstrated by comparing its predictions for creep-recovery and repeated creep-recovery tests with experimental results available in the literature; good agreement between predictions and test results is observed.
New generation of hydraulic pedotransfer functions for Europe
Tóth, B; Weynants, M; Nemes, A; Makó, A; Bilas, G; Tóth, G
2015-01-01
A range of continental-scale soil datasets exists in Europe with different spatial representation and based on different principles. We developed comprehensive pedotransfer functions (PTFs) for applications principally on spatial datasets with continental coverage. The PTF development included the prediction of soil water retention at various matric potentials and prediction of parameters to characterize soil moisture retention and the hydraulic conductivity curve (MRC and HCC) of European soils. We developed PTFs with a hierarchical approach, determined by the input requirements. The PTFs were derived by using three statistical methods: (i) linear regression where there were quantitative input variables, (ii) a regression tree for qualitative, quantitative and mixed types of information and (iii) mean statistics of developer-defined soil groups (class PTF) when only qualitative input parameters were available. Data of the recently established European Hydropedological Data Inventory (EU-HYDI), which holds the most comprehensive geographical and thematic coverage of hydro-pedological data in Europe, were used to train and test the PTFs. The applied modelling techniques and the EU-HYDI allowed the development of hydraulic PTFs that are more reliable and applicable for a greater variety of input parameters than those previously available for Europe. Therefore the new set of PTFs offers tailored advanced tools for a wide range of applications in the continent. PMID:25866465
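The hierarchical, input-dependent design can be sketched as a lookup that prefers a regression PTF when quantitative predictors are present and falls back to a class PTF (group means) otherwise; all coefficients, data, and the texture grouping below are placeholders rather than the EU-HYDI fits:

```python
# Toy hierarchical PTF: regression when sand/clay/OC are known, class means
# by texture otherwise. Data and coefficients are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 500
sand = rng.uniform(5, 90, n); clay = rng.uniform(5, 60, n); oc = rng.uniform(0.1, 6, n)
theta_fc = 0.45 - 0.002 * sand + 0.003 * clay + 0.01 * oc + rng.normal(0, 0.02, n)

X = np.column_stack([sand, clay, oc])
ptf_lin = LinearRegression().fit(X, theta_fc)   # quantitative-input PTF

texture = np.where(sand > 60, "Sand", np.where(clay > 35, "Clay", "Loam"))
class_ptf = {t: theta_fc[texture == t].mean() for t in np.unique(texture)}

def predict_water_content(sample: dict) -> float:
    """Hierarchical lookup: regression if numeric inputs exist, else class mean."""
    if all(k in sample for k in ("sand", "clay", "oc")):
        return float(ptf_lin.predict([[sample["sand"], sample["clay"], sample["oc"]]])[0])
    return class_ptf[sample["texture"]]

print(predict_water_content({"sand": 30.0, "clay": 25.0, "oc": 2.0}))
print(predict_water_content({"texture": "Sand"}))
```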
Viscosity Prediction for Petroleum Fluids Using Free Volume Theory and PC-SAFT
NASA Astrophysics Data System (ADS)
Khoshnamvand, Younes; Assareh, Mehdi
2018-04-01
In this study, free volume theory (FVT) in combination with perturbed-chain statistical associating fluid theory (PC-SAFT) is implemented for viscosity prediction of petroleum reservoir fluids containing ill-defined components such as cuts and plus fractions. FVT has three adjustable parameters per component for calculating viscosity, and these parameters are not available for petroleum cuts (especially plus fractions). In this work, the parameters are determined for different petroleum fractions: a model expressing them as a function of molecular weight and specific gravity is developed using 22 real reservoir fluid samples with API gravities in the range of 22 to 45. The accuracy of the proposed model is then compared, against experimental data, with that of the method of De la Porte et al. The model is applied to six real samples in an evaluation step, and the results are compared with available experimental data and the method of De la Porte et al. Finally, the methods of Lohrenz et al. and Pedersen et al., two common industrial methods for viscosity calculation, are compared with the proposed approach. The absolute average deviation was 9.7 % for the free volume theory method, 15.4 % for Lohrenz et al., and 22.16 % for Pedersen et al.
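The model comparison rests on the absolute average deviation; a small helper makes the metric explicit (the measured/predicted viscosity pairs are invented):

```python
# Absolute average deviation (AAD) between measured and predicted viscosities.
import numpy as np

def aad_percent(measured: np.ndarray, predicted: np.ndarray) -> float:
    """AAD% = 100/N * sum(|pred - meas| / meas)."""
    return 100.0 * np.mean(np.abs(predicted - measured) / measured)

measured = np.array([0.45, 0.62, 1.10, 2.30])    # viscosity, mPa·s (illustrative)
predicted = np.array([0.41, 0.66, 1.01, 2.55])
print(f"AAD = {aad_percent(measured, predicted):.1f} %")
```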
Calculation of vortex lift effect for cambered wings by the suction analogy
NASA Technical Reports Server (NTRS)
Lan, C. E.; Chang, J. F.
1981-01-01
An improved version of Woodward's chord plane aerodynamic panel method for subsonic and supersonic flow is developed for cambered wings exhibiting edge-separated vortex flow, including those with leading edge vortex flaps. The exact relation between leading edge thrust and suction force in potential flow is derived. Instead of assuming the rotated suction force to be normal to the wing surface at the leading edge, a new orientation for the rotated suction force is determined through consideration of the momentum principle. The supersonic suction analogy method is improved by using an effective angle of attack defined through a semi-empirical method. Comparisons of predicted results with available data in subsonic and supersonic flow are presented.
Soil spectral characterization
NASA Technical Reports Server (NTRS)
Stoner, E. R.; Baumgardner, M. F.
1981-01-01
The spectral characterization of soils is discussed with particular reference to the bidirectional reflectance factor as a quantitative measure of soil spectral properties, the role of soil color, soil parameters affecting soil reflectance, and field characteristics of soil reflectance. Comparisons between laboratory-measured soil spectra and Landsat MSS data have shown good agreement, especially in discriminating relative drainage conditions and organic matter levels in unvegetated soils. The capacity to measure both visible and infrared soil reflectance provides information on other soil characteristics and makes it possible to predict soil response to different management conditions. Field and laboratory soil spectral characterization helps define the extent to which intrinsic spectral information is available from soils as a consequence of their composition and field characteristics.
NASA Technical Reports Server (NTRS)
Greco, R. V.; Eaton, L. R.; Wilkinson, H. C.
1974-01-01
The work accomplished from January 1974 to October 1974 for the Zero-Gravity Atmospheric Cloud Physics Laboratory is summarized. The definition and development of an atmospheric cloud physics laboratory and the selection and delineation of candidate experiments that require the unique environment of zero gravity or near zero gravity are reported. The experiment program and the laboratory concept for a Spacelab payload to perform cloud microphysics research are defined. This multimission laboratory is planned to be available to the entire scientific community for furthering the basic understanding of cloud microphysical processes and phenomena, thereby contributing to improved weather prediction and ultimately to beneficial weather control and modification.
Can the theory of planned behaviour predict the physical activity behaviour of individuals?
Hobbs, Nicola; Dixon, Diane; Johnston, Marie; Howie, Kate
2013-01-01
The theory of planned behaviour (TPB) can identify cognitions that predict differences in behaviour between individuals. However, it is not clear whether the TPB can predict the behaviour of an individual person. This study employs a series of n-of-1 studies and time series analyses to examine the ability of the TPB to predict physical activity (PA) behaviours of six individuals. Six n-of-1 studies were conducted, in which TPB cognitions and up to three PA behaviours (walking, gym workout and a personally defined PA) were measured twice daily for six weeks. Walking was measured by pedometer step count, gym attendance by self-report with objective validation of gym entry and the personally defined PA behaviour by self-report. Intra-individual variability in TPB cognitions and PA behaviour was observed in all participants. The TPB showed variable predictive utility within individuals and across behaviours. The TPB predicted at least one PA behaviour for five participants but had no predictive utility for one participant. Thus, n-of-1 designs and time series analyses can be used to test theory in an individual.
NASA Technical Reports Server (NTRS)
Lind, Richard C. (Inventor); Brenner, Martin J.
2001-01-01
A structured singular value (mu) analysis method computes flutter margins from the robust stability of a linear aeroelastic model with uncertainty operators (Delta). Flight data are used to update the uncertainty operators so that they accurately account for errors in the computed model and for the observed range of dynamics of the aircraft under test caused by time-varying aircraft parameters, nonlinearities, and flight anomalies such as test nonrepeatability. This mu-based approach computes predicted flutter margins that are worst case with respect to the modeling uncertainty, for use in determining when the aircraft is approaching a flutter condition and in defining an expanded safe flight envelope that can be accepted with more confidence than envelopes from traditional methods, which do not update the analysis with flight data. Introducing mu as a flutter margin parameter presents several advantages over tracking damping trends as a measure of the tendency toward instability.
Vandenhove, H; Gil-García, C; Rigol, A; Vidal, M
2009-09-01
Predicting the transfer of radionuclides in the environment for normal release, accidental, disposal or remediation scenarios in order to assess exposure requires the availability of a large number of generic parameter values. One of the key parameters in environmental assessment is the solid-liquid distribution coefficient, K(d), which is used to predict radionuclide-soil interaction and subsequent radionuclide transport in the soil column. This article presents a review of K(d) values for uranium, radium, lead, polonium and thorium based on an extensive literature survey, including recent publications. The K(d) estimates are presented per soil group defined by texture and organic matter content (Sand, Loam, Clay and Organic), although texture class seemed not to significantly affect K(d). Where relevant, other K(d) classification systems are proposed and correlations with soil parameters are highlighted. The K(d) values obtained in this compilation are compared with earlier review data.
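As a usage note, under linear reversible sorption a K(d) value enters transport calculations through the standard retardation factor R = 1 + (ρb/θ)·K(d); this relationship is textbook soil hydrology rather than something stated in the abstract, and the inputs below are illustrative:

```python
# How K_d feeds a transport calculation: the retardation factor expresses how
# much slower a sorbing radionuclide moves than the pore water.
def retardation_factor(kd_l_per_kg: float, bulk_density: float, porosity: float) -> float:
    """R = 1 + (rho_b / theta) * K_d, with rho_b in kg/L and K_d in L/kg."""
    return 1.0 + (bulk_density / porosity) * kd_l_per_kg

# e.g., uranium in a loam soil: K_d ~ 200 L/kg, rho_b = 1.4 kg/L, theta = 0.4
print(f"R = {retardation_factor(200.0, 1.4, 0.4):.0f}")
```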
Schütte, Moritz; Risch, Thomas; Abdavi-Azar, Nilofar; Boehnke, Karsten; Schumacher, Dirk; Keil, Marlen; Yildiriman, Reha; Jandrasits, Christine; Borodina, Tatiana; Amstislavskiy, Vyacheslav; Worth, Catherine L.; Schweiger, Caroline; Liebs, Sandra; Lange, Martin; Warnatz, Hans- Jörg; Butcher, Lee M.; Barrett, James E.; Sultan, Marc; Wierling, Christoph; Golob-Schwarzl, Nicole; Lax, Sigurd; Uranitsch, Stefan; Becker, Michael; Welte, Yvonne; Regan, Joseph Lewis; Silvestrov, Maxine; Kehler, Inge; Fusi, Alberto; Kessler, Thomas; Herwig, Ralf; Landegren, Ulf; Wienke, Dirk; Nilsson, Mats; Velasco, Juan A.; Garin-Chesa, Pilar; Reinhard, Christoph; Beck, Stephan; Schäfer, Reinhold; Regenbrecht, Christian R. A.; Henderson, David; Lange, Bodo; Haybaeck, Johannes; Keilholz, Ulrich; Hoffmann, Jens; Lehrach, Hans; Yaspo, Marie-Laure
2017-01-01
Colorectal carcinoma represents a heterogeneous entity, with only a fraction of the tumours responding to available therapies, requiring a better molecular understanding of the disease in precision oncology. To address this challenge, the OncoTrack consortium recruited 106 CRC patients (stages I–IV) and developed a pre-clinical platform generating a compendium of drug sensitivity data totalling >4,000 assays testing 16 clinical drugs on patient-derived in vivo and in vitro models. This large biobank of 106 tumours, 35 organoids and 59 xenografts, with extensive omics data comparing donor tumours and derived models provides a resource for advancing our understanding of CRC. Models recapitulate many of the genetic and transcriptomic features of the donors, but defined less complex molecular sub-groups because of the loss of human stroma. Linking molecular profiles with drug sensitivity patterns identifies novel biomarkers, including a signature outperforming RAS/RAF mutations in predicting sensitivity to the EGFR inhibitor cetuximab. PMID:28186126
Coupled thermomechanical behavior of graphene using the spring-based finite element approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Georgantzinos, S. K., E-mail: sgeor@mech.upatras.gr; Anifantis, N. K., E-mail: nanif@mech.upatras.gr; Giannopoulos, G. I., E-mail: ggiannopoulos@teiwest.gr
The prediction of the thermomechanical behavior of graphene using a new coupled thermomechanical spring-based finite element approach is the aim of this work. Graphene sheets are modeled in nanoscale according to their atomistic structure. Based on molecular theory, the potential energy is defined as a function of temperature, describing the interatomic interactions in different temperature environments. The force field is approached by suitable straight spring finite elements. Springs simulate the interatomic interactions and interconnect nodes located at the atomic positions. Their stiffness matrix is expressed as a function of temperature. By using appropriate boundary conditions, various different graphene configurations are analyzed and their thermo-mechanical response is approached using conventional finite element procedures. A complete parametric study with respect to the geometric characteristics of graphene is performed, and the temperature dependency of the elastic material properties is finally predicted. Comparisons with available published works found in the literature demonstrate the accuracy of the proposed method.
Thomas, D. M.; Bouchard, C.; Church, T.; Slentz, C.; Kraus, W. E.; Redman, L. M.; Martin, C. K.; Silva, A. M.; Vossen, M.; Westerterp, K.; Heymsfield, S. B.
2013-01-01
Summary: Weight loss resulting from an exercise intervention tends to be lower than predicted. Modest weight loss can arise from an increase in energy intake, physiological reductions in resting energy expenditure, an increase in lean tissue or a decrease in non-exercise activity. Lower-than-expected weight loss could also arise from weak and unvalidated assumptions within predictive models. To investigate these causes, we systematically reviewed studies that monitored compliance to exercise prescriptions and measured exercise-induced change in body composition. Changes in body energy stores were calculated to determine the deficit between total daily energy intake and energy expenditure. This information, combined with available measurements, was used to critically evaluate explanations for low exercise-induced weight loss. We conclude that the small magnitude of weight loss observed in the majority of evaluated exercise interventions is primarily due to low doses of prescribed exercise energy expenditure compounded by a concomitant increase in caloric intake. PMID:22681398
Does climate undermine subjective well-being? A 58-nation study.
Fischer, Ronald; Van de Vliert, Evert
2011-08-01
The authors test predictions from climato-economic theories of culture that climate and wealth interact in their influence on psychological processes. Demanding climates (defined as colder than temperate and hotter than temperate climates) create potential threats for humans. If these demands can be met by available economic resources, individuals experience challenging opportunities for self-expression and personal growth and consequently will report lowest levels of ill-being. If threatening climatic demands cannot be met by resources, resulting levels of reported ill-being will be highest. These predictions are confirmed in nation-level means of health complaints, burnout, anxiety, and depression across 58 societies. Climate, wealth, and their interaction together account for 35% of the variation in overall subjective ill-being, even when controlling for known predictors of subjective well-being. Further investigations of the process suggest that cultural individualism does not mediate these effects, but subjective well-being may function as a mediator of the impact of ecological variables on ill-being.
NASA Astrophysics Data System (ADS)
Lange, Rense
2015-02-01
An extension of concurrent validity is proposed that uses qualitative data for the purpose of validating quantitative measures. The approach relies on Latent Semantic Analysis (LSA), which places verbal (written) statements in a high-dimensional semantic space. Using data from a medical/psychiatric domain as a case study, Near-Death Experiences (NDEs), we established concurrent validity by connecting NDErs' qualitative (written) experiential accounts with their locations on a Rasch-scalable measure of NDE intensity. Concurrent validity received strong empirical support, since the variance in the Rasch measures could be predicted reliably from the coordinates of the accounts in the LSA-derived semantic space (R² = 0.33). These coordinates also predicted NDErs' ages with considerable precision (R² = 0.25). Both estimates are probably artificially low due to the small available data samples (n = 588). It appears that Rasch scalability of NDE intensity is a prerequisite for these findings, as each intensity level is associated (at least probabilistically) with a well-defined pattern of item endorsements.
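The LSA-based validation pipeline is easy to sketch: embed the written accounts via truncated SVD of a TF-IDF matrix and regress the quantitative measure on the resulting coordinates. The corpus and scores below are stand-ins, and scikit-learn's TruncatedSVD over TF-IDF is a common LSA implementation, not necessarily the one used in the study:

```python
# LSA coordinates of written accounts regressed against Rasch-type measures.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LinearRegression

accounts = [
    "I floated above my body and felt complete peace",
    "a bright light drew me through a tunnel",
    "I saw deceased relatives waiting for me",
    "everything went dark and I remember nothing",
] * 25                                   # pretend corpus of 100 accounts
rng = np.random.default_rng(9)
rasch_scores = rng.normal(0, 1, len(accounts))   # placeholder intensity measures

X = TfidfVectorizer().fit_transform(accounts)
coords = TruncatedSVD(n_components=3, random_state=0).fit_transform(X)
print(LinearRegression().fit(coords, rasch_scores).score(coords, rasch_scores))
```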
Witkiewicz, Agnieszka K; Balaji, Uthra; Eslinger, Cody; McMillan, Elizabeth; Conway, William; Posner, Bruce; Mills, Gordon B; O'Reilly, Eileen M; Knudsen, Erik S
2016-08-16
Pancreatic ductal adenocarcinoma (PDAC) harbors the worst prognosis of any common solid tumor, and multiple failed clinical trials indicate therapeutic recalcitrance. Here, we use exome sequencing of patient tumors and find multiple conserved genetic alterations. However, the majority of tumors exhibit no clearly defined therapeutic target. High-throughput drug screens using patient-derived cell lines found rare examples of sensitivity to monotherapy, with most models requiring combination therapy. Using PDX models, we confirmed the effectiveness and selectivity of the identified treatment responses. Out of more than 500 single and combination drug regimens tested, no single treatment was effective for the majority of PDAC tumors, and each case had unique sensitivity profiles that could not be predicted using genetic analyses. These data indicate a shortcoming of reliance on genetic analysis to predict efficacy of currently available agents against PDAC and suggest that sensitivity profiling of patient-derived models could inform personalized therapy design for PDAC. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.
Sarkozy, Clémentine; Camus, Vincent; Tilly, Hervé; Salles, Gilles; Jardin, Fabrice
2015-07-01
Diffuse large B-cell lymphoma (DLBCL) is the most common form of aggressive non-Hodgkin lymphoma, accounting for 30-40% of newly diagnosed cases. Obesity is a well-defined risk factor for DLBCL. However, the impact of body mass index (BMI) on DLBCL prognosis is controversial. Recent studies suggest that skeletal muscle wasting (sarcopenia) or loss of fat mass can be detected by computed tomography (CT) images and is useful for predicting the clinical outcome in several types of cancer including DLBCL. Several hypotheses have been proposed to explain the differences in DLBCL outcome according to BMI or weight that include tolerance to treatment, inflammatory background and chemotherapy or rituximab metabolism. In this review, we summarize the available literature, addressing the impact and physiopathological relevance of simple anthropometric tools including BMI and tissue distribution measurements. We also discuss their relationship with other nutritional parameters and their potential role in the management of patients with DLBCL.
Biederman, J; Petty, C R; Fried, R; Doyle, A E; Spencer, T; Seidman, L J; Gross, L; Poetzl, K; Faraone, S V
2007-08-01
Although individuals with attention deficit-hyperactivity disorder (ADHD) commonly exhibit deficits in executive functions that greatly increase the morbidity of the disorder, all available information on the subject is cross-sectional. Males (n = 85) aged 9-22 years with ADHD were followed over 7 years into young adulthood and assessed on measures of sustained attention/vigilance, planning and organization, response inhibition, set shifting and categorization, selective attention and visual scanning, verbal and visual learning, and memory. Executive function deficits (EFDs) were defined as a binary variable, present when a subject manifested at least two test scores 1.5 standard deviations from controls. The majority of subjects maintained EFDs over time (kappa: 0.41, P < 0.001; sensitivity: 55%, specificity: 85%, positive predictive value: 69%, and negative predictive value: 75%). Considering the morbidity of EFDs, these findings stress the importance of their early recognition for prevention and early intervention strategies. EFDs are stable over time.
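The stability statistics come from a paired 2x2 table; the sketch below computes Cohen's kappa and the predictive values for hypothetical baseline/follow-up EFD classifications (the counts are invented and will not reproduce the reported kappa of 0.41):

```python
# Agreement between baseline and follow-up EFD status on invented counts.
import numpy as np
from sklearn.metrics import cohen_kappa_score

# 1 = EFD present; paired classifications for 85 hypothetical subjects
baseline = np.array([1] * 30 + [1] * 10 + [0] * 8 + [0] * 37)
followup = np.array([1] * 30 + [0] * 10 + [1] * 8 + [0] * 37)

print(f"kappa = {cohen_kappa_score(baseline, followup):.2f}")
tp = np.sum((baseline == 1) & (followup == 1))
fp = np.sum((baseline == 1) & (followup == 0))
fn = np.sum((baseline == 0) & (followup == 1))
tn = np.sum((baseline == 0) & (followup == 0))
print(f"PPV = {tp/(tp+fp):.2f}, NPV = {tn/(tn+fn):.2f}")
```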
The influence of hydrogen bonding on partition coefficients
NASA Astrophysics Data System (ADS)
Borges, Nádia Melo; Kenny, Peter W.; Montanari, Carlos A.; Prokopczyk, Igor M.; Ribeiro, Jean F. R.; Rocha, Josmar R.; Sartori, Geraldo Rodrigues
2017-02-01
This Perspective explores how consideration of hydrogen bonding can be used to both predict and better understand partition coefficients. It is shown how the polarity of both compounds and substructures can be estimated from measured alkane/water partition coefficients. When polarity is defined in this manner, hydrogen bond donors are typically less polar than hydrogen bond acceptors. Analysis of alkane/water partition coefficients in conjunction with molecular electrostatic potential calculations suggests that aromatic chloro substituents may be less lipophilic than is generally believed and that some of the effect of chloro-substitution stems from making the aromatic π-cloud less available to hydrogen bond donors. Relationships between polarity and calculated hydrogen bond basicity are derived for aromatic nitrogen and carbonyl oxygen. Aligned hydrogen bond acceptors appear to present special challenges for prediction of alkane/water partition coefficients, and this may reflect 'frustration' of solvation resulting from overlapping hydration spheres. It is also shown how calculated hydrogen bond basicity can be used to model the effect of aromatic aza-substitution on octanol/water partition coefficients.
Biomarkers for Psychiatry: The Journey from Fantasy to Fact, a Report of the 2013 CINP Think Tank
Millan, Mark J.; Bahn, Sabine; Bertolino, Alessandro; Turck, Christoph W.; Kapur, Shitij; Möller, Hans-Jürgen; Dean, Brian
2015-01-01
Background: A think tank sponsored by the Collegium Internationale Neuro-Psychopharmacologicum (CINP) debated the status and prospects of biological markers for psychiatric disorders, focusing on schizophrenia and major depressive disorder. Methods: Discussions covered markers defining and predicting specific disorders or domains of dysfunction, as well as predicting and monitoring medication efficacy. Deliberations included clinically useful and viable biomarkers, why suitable markers are not available, and the need for tightly-controlled sample collection. Results: Different types of biomarkers, appropriate sensitivity, specificity, and broad-based exploitability were discussed. Whilst a number of candidates are in the discovery phases, all will require replication in larger, real-life cohorts. Clinical cost-effectiveness also needs to be established. Conclusions: Since a single measure is unlikely to suffice, multi-modal strategies look more promising, although they bring greater technical and implementation complexities. Identifying reproducible, robust biomarkers will probably require pre-competitive consortia to provide the resources needed to identify, validate, and develop the relevant clinical tests. PMID:25899066
Preliminary noise tradeoff study of a Mach 2.7 cruise aircraft
NASA Technical Reports Server (NTRS)
Mascitti, V. R.; Maglieri, D. J. (Editor); Raney, J. P. (Editor)
1979-01-01
NASA computer codes in the areas of preliminary sizing and enroute performance, takeoff and landing performance, aircraft noise prediction, and economics were used in a preliminary noise tradeoff study for a Mach 2.7 design supersonic cruise concept. Aerodynamic configuration data were based on wind-tunnel model tests and related analyses. Aircraft structural characteristics and weight were based on advanced structural design methodologies, assuming conventional titanium technology. The most advanced noise prediction techniques available were used, and aircraft operating costs were estimated using accepted industry methods. The four engine cycles included in the study were based on assumed 1985 technology levels. Propulsion data were provided by aircraft manufacturers. Additional empirical data are needed to define both noise reduction features and other operating characteristics of all engine cycles under study. Data on VCE design parameters, coannular nozzle inverted flow noise reduction, and advanced mechanical suppressors are urgently needed to reduce the present uncertainties in studies of this type.
Effective Design of Multifunctional Peptides by Combining Compatible Functions
Diener, Christian; Garza Ramos Martínez, Georgina; Moreno Blas, Daniel; Castillo González, David A.; Corzo, Gerardo; Castro-Obregon, Susana; Del Rio, Gabriel
2016-01-01
Multifunctionality is a common trait of many natural proteins and peptides, yet the rules to generate such multifunctionality remain unclear. We propose that the rules defining some protein/peptide functions are compatible. To explore this hypothesis, we trained a computational method to predict cell-penetrating peptides (CPPs) at the sequence level and learned that antimicrobial peptides (AMPs) and DNA-binding proteins are compatible with the rules of our predictor. Based on this finding, we expected that designing peptides for CPP activity may also confer AMP and DNA-binding activities. To test this prediction, we designed peptides that embedded two independent functional domains (nuclear localization and yeast pheromone activity), linked by optimizing their composition to fit the rules characterizing cell-penetrating peptides. These peptides presented effective cell penetration, DNA-binding, pheromone and antimicrobial activities, thus confirming the effectiveness of our computational approach to design multifunctional peptides with potential therapeutic uses. Our computational implementation is available at http://bis.ifc.unam.mx/en/software/dcf. PMID:27096600
Cascão, Angela Maria; Jorge, Maria Helena Prado de Mello; Costa, Antonio José Leal; Kale, Pauline Lorena
2016-01-01
Ill-defined causes of death are common among the elderly owing to the high frequency of comorbidities and, consequently, to the difficulty in defining the underlying cause of death. To analyze the validity and reliability of the "primary diagnosis" in hospitalization to recover the information on the underlying cause of death in natural deaths among the elderly whose deaths were originally assigned to "ill-defined cause" in their Death Certificate. The hospitalizations occurred in the state of Rio de Janeiro, in 2006. The databases obtained in the Information Systems on Mortality and Hospitalization were probabilistically linked. The following data were calculated for hospitalizations of the elderly that evolved into deaths with a natural cause: concordance percentages, Kappa coefficient, sensitivity, specificity, and the positive predictive value of the primary diagnosis. Deaths related to "ill-defined causes" were assigned to a new cause, which was defined based on the primary diagnosis. The reliability of the primary diagnosis was good, according to the total percentage of consistency (50.2%), and fair, according to the Kappa coefficient (k = 0.4; p < 0.0001). Diseases related to the circulatory system and neoplasia occurred with the highest frequency among the deaths and the hospitalizations and presented a higher consistency of positive predictive values per chapter and grouping of the International Classification of Diseases. The recovery of the information on the primary cause occurred in 22.6% of the deaths with ill-defined causes (n = 14). The methodology developed and applied for the recovery of the information on the natural cause of death among the elderly in this study had the advantage of effectiveness and the reduction of costs compared to an investigation of the death that is recommended in situations of non-linked and low positive predictive values. Monitoring the mortality profile by the cause of death is necessary to periodically update the predictive values.
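The agreement statistics reported (percent concordance and Cohen's kappa) follow from a cross-tabulation of certificate cause against hospital primary diagnosis. A minimal sketch on a hypothetical 2x2 table (the study's own tables are per ICD chapter and grouping):

    import numpy as np

    # Hypothetical agreement table: rows = certificate cause (A/B),
    # columns = hospital primary diagnosis (A/B). Counts are invented.
    table = np.array([[40, 15],
                      [10, 35]])
    n = table.sum()
    po = np.trace(table) / n                          # observed agreement
    pe = (table.sum(0) * table.sum(1)).sum() / n**2   # chance-expected agreement
    kappa = (po - pe) / (1 - pe)
    print(round(kappa, 2))  # Cohen's kappa for this toy table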
Weiner, Kevin S; Barnett, Michael A; Witthoft, Nathan; Golarai, Golijeh; Stigliani, Anthony; Kay, Kendrick N; Gomez, Jesse; Natu, Vaidehi S; Amunts, Katrin; Zilles, Karl; Grill-Spector, Kalanit
2018-04-15
The parahippocampal place area (PPA) is a widely studied high-level visual region in the human brain involved in place and scene processing. The goal of the present study was to identify the most probable location of place-selective voxels in medial ventral temporal cortex. To achieve this goal, we first used cortex-based alignment (CBA) to create a probabilistic place-selective region of interest (ROI) from one group of 12 participants. We then tested how well this ROI could predict place selectivity in each hemisphere within a new group of 12 participants. Our results reveal that a probabilistic ROI (pROI) generated from one group of 12 participants accurately predicts the location and functional selectivity in individual brains from a new group of 12 participants, despite between subject variability in the exact location of place-selective voxels relative to the folding of parahippocampal cortex. Additionally, the prediction accuracy of our pROI is significantly higher than that achieved by volume-based Talairach alignment. Comparing the location of the pROI of the PPA relative to published data from over 500 participants, including data from the Human Connectome Project, shows a striking convergence of the predicted location of the PPA and the cortical location of voxels exhibiting the highest place selectivity across studies using various methods and stimuli. Specifically, the most predictive anatomical location of voxels exhibiting the highest place selectivity in medial ventral temporal cortex is the junction of the collateral and anterior lingual sulci. Methodologically, we make this pROI freely available (vpnl.stanford.edu/PlaceSelectivity), which provides a means to accurately identify a functional region from anatomical MRI data when fMRI data are not available (for example, in patient populations). Theoretically, we consider different anatomical and functional factors that may contribute to the consistent anatomical location of place selectivity relative to the folding of high-level visual cortex. Copyright © 2017 Elsevier Inc. All rights reserved.
2013-09-01
    #define CSTR    .5   // 1/°C
    #define SKBFN   6.3  // liters/(h m^2)
    #define Skbfmax 90.  // conservative; could be higher for a fit person
    #define ...
    WarmC = 0;
    if (Tsk < TTSK) Colds = TTSK - Tsk;
    if (Tc > TTCR)  WarmC = Tc - TTCR;
    Skbf = (SKBFN + CDIL*WarmC) / (1 + CSTR*Colds);  // liters/(h m^2)
    if (Skbf ...
Automated 3D structure composition for large RNAs
Popenda, Mariusz; Szachniuk, Marta; Antczak, Maciej; Purzycka, Katarzyna J.; Lukasiak, Piotr; Bartol, Natalia; Blazewicz, Jacek; Adamiak, Ryszard W.
2012-01-01
Understanding the numerous functions that RNAs play in living cells depends critically on knowledge of their three-dimensional structure. Due to the difficulties in experimentally assessing structures of large RNAs, there is currently great demand for new high-resolution structure prediction methods. We present a novel method for the fully automated prediction of RNA 3D structures from a user-defined secondary structure. The concept is founded on a machine translation system. The translation engine operates on the RNA FRABASE database, tailored to a dictionary relating RNA secondary structure elements to tertiary structure elements. The translation algorithm is very fast; an initial 3D structure is composed within seconds on a single processor. The method assures the prediction of large RNA 3D structures of high quality. Our approach needs neither structural templates nor RNA sequence alignment, required for comparative methods. This enables the building of unresolved yet native and artificial RNA structures. The method is implemented in a publicly available, user-friendly server RNAComposer. It works in an interactive mode and a batch mode. The batch mode is designed for large-scale modelling and accepts atomic distance restraints. Presently, the server is set to build RNA structures of up to 500 residues. PMID:22539264
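The user-defined secondary structure driving such composition is conventionally written in dot-bracket notation; before elements can be mapped to tertiary fragments, the base pairs must be recovered from it. A minimal sketch of that parsing step (not RNAComposer's code):

    def base_pairs(dot_bracket):
        """Map each paired position to its partner in a dot-bracket string."""
        stack, pairs = [], {}
        for i, ch in enumerate(dot_bracket):
            if ch == '(':
                stack.append(i)
            elif ch == ')':
                j = stack.pop()
                pairs[i], pairs[j] = j, i
        return pairs

    print(base_pairs("((..))"))  # {4: 1, 1: 4, 5: 0, 0: 5}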
Roymondal, Uttam; Das, Shibsankar; Sahoo, Satyabrata
2009-01-01
We present an expression measure of a gene, devised to predict the level of gene expression from relative codon bias (RCB). There are a number of measures currently in use that quantify codon usage in genes. Based on the hypothesis that gene expressivity and codon composition are strongly correlated, RCB has been defined to provide an intuitively meaningful measure of the extent of codon preference in a gene. We outline a simple approach to assess the strength of RCB (RCBS) in genes as a guide to their likely expression levels and illustrate this with an analysis of the Escherichia coli (E. coli) genome. Our efforts to quantitatively predict gene expression levels in E. coli met with a high level of success. Surprisingly, we observe a strong correlation between RCBS and protein length, indicating natural selection in favour of shorter genes being expressed at higher levels. The agreement of our result with high protein abundances, microarray data and radioactive data demonstrates that the genomic expression profile available in our method can be applied in a meaningful way to the study of cell physiology and also for more detailed studies of particular genes of interest. PMID:19131380
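As a hedged illustration of how a relative-codon-bias score of this kind can be computed (the exact formula below is an assumption for the sketch, not necessarily the authors' published definition): each codon's bias is taken as its observed frequency relative to the expectation from position-specific base frequencies, and the gene-level score is a geometric mean of (1 + bias) minus one.

    def rcbs(gene, codon_freq, pos_base_freq):
        """Sketch of a relative-codon-bias strength score for one gene.
        codon_freq[c]       -- observed frequency of codon c
        pos_base_freq[k][b] -- frequency of base b at codon position k (0, 1, 2)
        """
        codons = [gene[i:i + 3] for i in range(0, len(gene) - len(gene) % 3, 3)]
        score = 1.0
        for c in codons:
            expected = (pos_base_freq[0][c[0]] * pos_base_freq[1][c[1]]
                        * pos_base_freq[2][c[2]])
            bias = (codon_freq[c] - expected) / expected   # relative codon bias
            score *= (1.0 + bias) ** (1.0 / len(codons))   # geometric mean
        return score - 1.0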
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takeshita, Kenji
A mathematical model to predict the extraction behavior of metal ions between a polymer gel and an aqueous solution was proposed. It consists of the Flory-Huggins formula for evaluating thermodynamically the physico-chemical properties of polymer gel, the modified Stokes-Einstein equation to evaluate the mass transfer rate of metal ion into polymer gel and the equation to evaluate the extraction equilibrium. The extraction of lanthanide elements, Nd(III), Sm(III) and Gd(III), from an aqueous solution containing nitrate ion was carried out by the use of SDB (styrene-divinylbenzene copolymer) gel swollen with a bidentate organophosphorus compound, CMP (dihexyl-N,N-diethylcarbamoylmethylphosphonate). The binary extraction and the effect of the crosslinking degree of SDB gel on the extraction rate were examined. These experimental results were in agreement with the predictions calculated by the proposed model. It was confirmed that the extraction behavior of lanthanide ions into the SDB gel was predicted accurately, when the physico-chemical properties of SDB gel, such as the affinity between SDB and CMP (χ) and the crosslinking degree (ν_e), and a coefficient defined in the modified Stokes-Einstein equation (K_0) were known. This model is available as a tool to design an extraction chromatographic process using polymer gel.
Expediting topology data gathering for the TOPDB database.
Dobson, László; Langó, Tamás; Reményi, István; Tusnády, Gábor E
2015-01-01
The Topology Data Bank of Transmembrane Proteins (TOPDB, http://topdb.enzim.ttk.mta.hu) contains experimentally determined topology data of transmembrane proteins. Recently, we have updated TOPDB from several sources and utilized a newly developed topology prediction algorithm to determine the most reliable topology using the results of experiments as constraints. In addition to collecting the experimentally determined topology data published in the last couple of years, we gathered topographies defined by the TMDET algorithm using 3D structures from the PDBTM. Results of global topology analysis of various organisms as well as topology data generated by high throughput techniques, like the sequential positions of N- or O-glycosylations were incorporated into the TOPDB database. Moreover, a new algorithm was developed to integrate scattered topology data from various publicly available databases and a new method was introduced to measure the reliability of predicted topologies. We show that reliability values highly correlate with the per protein topology accuracy of the utilized prediction method. Altogether, more than 52,000 new topology data and more than 2600 new transmembrane proteins have been collected since the last public release of the TOPDB database. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
Multi-objective optimization to predict muscle tensions in a pinch function using genetic algorithm
NASA Astrophysics Data System (ADS)
Bensghaier, Amani; Romdhane, Lotfi; Benouezdou, Fethi
2012-03-01
This work is focused on the determination of the thumb and the index finger muscle tensions in a tip pinch task. A biomechanical model of the musculoskeletal system of the thumb and the index finger is developed. Due to the assumptions made in developing the biomechanical model, the formulated force analysis problem is indeterminate, leading to an infinite number of solutions. Thus, constrained single- and multi-objective optimization methodologies are used in order to explore the muscular redundancy and to predict optimal muscle tension distributions. Various models are investigated using the optimization process. The basic criteria to minimize are the sum of the muscle stresses, the sum of individual muscle tensions and the maximum muscle stress. The multi-objective optimization is solved using a Pareto genetic algorithm to obtain non-dominated solutions, defined as the set of optimal distributions of muscle tensions. The results show the advantage of the multi-objective formulation over the single-objective one. The obtained solutions are compared to those available in the literature, demonstrating the effectiveness of our approach in the analysis of the fingers' musculoskeletal systems when predicting muscle tensions.
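For context, the underlying force-sharing problem can be written as a constrained minimization. The sketch below solves the single-objective sum-of-tensions variant with invented moment arms and torques (not the authors' model or data); the paper's Pareto genetic algorithm instead trades several such objectives off simultaneously.

    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical moment arms (m) of 3 muscles about 2 joints, and the joint
    # torques (N*m) the pinch posture must balance. Values are illustrative.
    R = np.array([[0.030, 0.020, 0.010],
                  [0.010, 0.025, 0.030]])
    tau = np.array([1.5, 1.0])

    # Minimize the sum of muscle tensions f (N) subject to torque equilibrium
    # R @ f = tau and non-negative tensions.
    res = linprog(c=np.ones(3), A_eq=R, b_eq=tau, bounds=[(0.0, None)] * 3)
    print(res.x)  # one optimal tension distribution; other cost functions
                  # (e.g. muscle stress) yield other points toward the Pareto set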
Multivariate Cholesky models of human female fertility patterns in the NLSY.
Rodgers, Joseph Lee; Bard, David E; Miller, Warren B
2007-03-01
Substantial evidence now exists that variables measuring or correlated with human fertility outcomes have a heritable component. In this study, we define a series of age-sequenced fertility variables, and fit multivariate models to account for underlying shared genetic and environmental sources of variance. We make predictions based on a theory developed by Udry [(1996) Biosocial models of low-fertility societies. In: Casterline JB, Lee RD, Foote KA (eds) Fertility in the United States: new patterns, new theories. The Population Council, New York] suggesting that biological/genetic motivations can be more easily realized and measured in settings in which fertility choices are available. Udry's theory, along with principles from molecular genetics and certain tenets of life history theory, allows us to make specific predictions about biometrical patterns across age. Consistent with predictions, our results suggest that there are different sources of genetic influence on fertility variance at early compared to later ages, but that there is only one source of shared environmental influence, which occurs at early ages. These patterns are suggestive of the types of gene-gene and gene-environment interactions for which we must account to better understand individual differences in fertility outcomes.
Cox, William T. L.; Devine, Patricia G.
2015-01-01
We advance a theory-driven approach to stereotype structure, informed by connectionist theories of cognition. Whereas traditional models define or tacitly assume that stereotypes possess inherently Group → Attribute activation directionality (e.g., Black activates criminal), our model predicts heterogeneous stereotype directionality. Alongside the classically studied Group → Attribute stereotypes, some stereotypes should be bidirectional (i.e., Group ⇄ Attribute) and others should have Attribute → Group unidirectionality (e.g., fashionable activates gay). We tested this prediction in several large-scale studies with human participants (NCombined = 4,817), assessing stereotypic inferences among various groups and attributes. Supporting predictions, we found heterogeneous directionality both among the stereotype links related to a given social group and also between the links of different social groups. These efforts yield rich datasets that map the networks of stereotype links related to several social groups. We make these datasets publicly available, enabling other researchers to explore a number of questions related to stereotypes and stereotyping. Stereotype directionality is an understudied feature of stereotypes and stereotyping with widespread implications for the development, measurement, maintenance, expression, and change of stereotypes, stereotyping, prejudice, and discrimination. PMID:25811181
26 CFR 1.148-1 - Definitions and elections.
Code of Federal Regulations, 2010 CFR
2010-04-01
...). Annuity contract means annuity contract as defined in section 72. Available amount means available amount as defined in § 1.148-6(d)(3)(iii). Bona fide debt service fund means a fund, which may include... defined in section 1273(a)(1)) or premium on an obligation— (i) An amount that does not exceed 2 percent...
PARTS: Probabilistic Alignment for RNA joinT Secondary structure prediction
Harmanci, Arif Ozgun; Sharma, Gaurav; Mathews, David H.
2008-01-01
A novel method is presented for joint prediction of alignment and common secondary structures of two RNA sequences. The joint consideration of common secondary structures and alignment is accomplished by structural alignment over a search space defined by the newly introduced motif called matched helical regions. The matched helical region formulation generalizes previously employed constraints for structural alignment and thereby better accommodates the structural variability within RNA families. A probabilistic model based on pseudo free energies obtained from precomputed base pairing and alignment probabilities is utilized for scoring structural alignments. Maximum a posteriori (MAP) common secondary structures, sequence alignment and joint posterior probabilities of base pairing are obtained from the model via a dynamic programming algorithm called PARTS. The advantage of the more general structural alignment of PARTS is seen in secondary structure predictions for the RNase P family. For this family, the PARTS MAP predictions of secondary structures and alignment perform significantly better than prior methods that utilize a more restrictive structural alignment model. For the tRNA and 5S rRNA families, the richer structural alignment model of PARTS does not offer a benefit and the method therefore performs comparably with existing alternatives. For all RNA families studied, the posterior probability estimates obtained from PARTS offer an improvement over posterior probability estimates from a single sequence prediction. When considering the base pairings predicted over a threshold value of confidence, the combination of sensitivity and positive predictive value is superior for PARTS than for the single sequence prediction. PARTS source code is available for download under the GNU public license at http://rna.urmc.rochester.edu. PMID:18304945
Planning for subacute care: predicting demand using acute activity data.
Green, Janette P; McNamee, Jennifer P; Kobel, Conrad; Seraji, Md Habibur R; Lawrence, Suanne J
2016-01-01
Objective The aim of the present study was to develop a robust model that uses the concept of 'rehabilitation-sensitive' Diagnosis Related Groups (DRGs) in predicting demand for rehabilitation and geriatric evaluation and management (GEM) care following acute in-patient episodes provided in Australian hospitals. Methods The model was developed using statistical analyses of national datasets, informed by a panel of expert clinicians and jurisdictional advice. Logistic regression analysis was undertaken using acute in-patient data, published national hospital statistics and data from the Australasian Rehabilitation Outcomes Centre. Results The predictive model comprises tables of probabilities that patients will require rehabilitation or GEM care after an acute episode, with columns defined by age group and rows defined by grouped Australian Refined (AR)-DRGs. Conclusions The existing concept of rehabilitation-sensitive DRGs was revised and extended. When applied to national data, the model provided a conservative estimate of 83% of the activity actually provided. An example demonstrates the application of the model for service planning. What is known about the topic? Health service planning is core business for jurisdictions and local areas. With populations ageing and an acknowledgement of the underservicing of subacute care, it is timely to find improved methods of estimating demand for this type of care. Traditionally, age-sex standardised utilisation rates for individual DRGs have been applied to Australian Bureau of Statistics (ABS) population projections to predict the future need for subacute services. Improved predictions became possible when some AR-DRGs were designated 'rehabilitation-sensitive'. This improved methodology has been used in several Australian jurisdictions. What does this paper add? This paper presents a new tool, or model, to predict demand for rehabilitation and GEM services based on in-patient acute activity. In this model, the methodology based on rehabilitation-sensitive AR-DRGs has been extended by updating them to AR-DRG Version 7.0, quantifying the level of 'sensitivity' and incorporating the patient's age to improve the prediction of demand for subacute services. What are the implications for practitioners? The predictive model takes the form of tables of probabilities that patients will require rehabilitation or GEM care after an acute episode and can be applied to acute in-patient administrative datasets in any Australian jurisdiction or local area. The use of patient-level characteristics will enable service planners to improve their forecasting of demand for these services. Clinicians and jurisdictional representatives consulted during the project regarded the model favourably and believed that it was an improvement on currently available methods.
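Applied to data, the model reduces to a table lookup: each acute episode contributes the probability in the cell indexed by its grouped AR-DRG and age group, and the sum over episodes estimates expected rehabilitation/GEM demand. A minimal sketch with invented group labels and probabilities (the model's actual cells come from the national logistic-regression estimates):

    # Hypothetical probabilities P(subacute care | AR-DRG group, age group).
    prob = {
        ("stroke-group", "75+"): 0.50,
        ("stroke-group", "<75"): 0.30,
        ("joint-replacement-group", "75+"): 0.40,
    }

    # Acute episodes, each labelled with its grouped AR-DRG and age group.
    episodes = [("stroke-group", "75+"),
                ("joint-replacement-group", "75+"),
                ("stroke-group", "<75")]

    expected = sum(prob.get(e, 0.0) for e in episodes)
    print(expected)  # expected number of episodes flowing to rehab/GEM care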
Metabolomics biomarkers to predict acamprosate treatment response in alcohol-dependent subjects.
Hinton, David J; Vázquez, Marely Santiago; Geske, Jennifer R; Hitschfeld, Mario J; Ho, Ada M C; Karpyak, Victor M; Biernacka, Joanna M; Choi, Doo-Sup
2017-05-31
Precision medicine for alcohol use disorder (AUD) allows optimal treatment of the right patient with the right drug at the right time. Here, we generated multivariable models incorporating clinical information and serum metabolite levels to predict acamprosate treatment response. The sample of 120 patients was randomly split into a training set (n = 80) and test set (n = 40) five independent times. Treatment response was defined as complete abstinence (no alcohol consumption during 3 months of acamprosate treatment) while nonresponse was defined as any alcohol consumption during this period. In each of the five training sets, we built a predictive model using a least absolute shrinkage and selection operator (LASSO) penalized selection method and then evaluated the predictive performance of each model in the corresponding test set. The models predicted acamprosate treatment response with a mean sensitivity and specificity in the test sets of 0.83 and 0.31, respectively, suggesting our model performed well at predicting responders, but not non-responders (i.e. many non-responders were predicted to respond). Studies with larger sample sizes and additional biomarkers will expand the clinical utility of predictive algorithms for pharmaceutical response in AUD.
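The modelling loop described (L1-penalized logistic regression on repeated 80/40 splits, scored by sensitivity and specificity) can be sketched as follows with synthetic stand-in data; the features, hyperparameters and the single split shown are assumptions:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in data: rows = patients, columns = clinical + metabolite features.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 20))
    y = rng.integers(0, 2, size=120)  # 1 = abstinent (responder), 0 = any drinking

    # One of five random train/test splits, mirroring the 80/40 design.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=80, random_state=0)

    # L1 (LASSO-type) penalty induces a sparse set of selected predictors.
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X_tr, y_tr)
    pred = model.predict(X_te)

    tp = ((pred == 1) & (y_te == 1)).sum(); fn = ((pred == 0) & (y_te == 1)).sum()
    tn = ((pred == 0) & (y_te == 0)).sum(); fp = ((pred == 1) & (y_te == 0)).sum()
    print("sensitivity", tp / (tp + fn), "specificity", tn / (tn + fp))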
A first approach to calculate BIOCLIM variables and climate zones for Antarctica
NASA Astrophysics Data System (ADS)
Wagner, Monika; Trutschnig, Wolfgang; Bathke, Arne C.; Ruprecht, Ulrike
2018-02-01
For testing the hypothesis that macroclimatological factors determine the occurrence, biodiversity, and species specificity of both symbiotic partners of Antarctic lecideoid lichens, we present a first approach for the computation of the full set of 19 BIOCLIM variables, as available at http://www.worldclim.org/ for all regions of the world with the exception of Antarctica. Annual mean temperature (Bio 1) and annual precipitation (Bio 12) were chosen to define climate zones of the Antarctic continent and adjacent islands as required for ecological niche modeling (ENM). The zones are based on data for the years 2009-2015, which were obtained from the Antarctic Mesoscale Prediction System (AMPS) database of the Ohio State University. For both temperature and precipitation, two separate zonings were specified; temperature values were divided into 12 zones (named 1 to 12) and precipitation values into five (named A to E). By combining these two partitions, we defined climate zonings where each geographical point can be uniquely assigned to exactly one zone, which allows an immediate explicit interpretation. The soundness of the newly calculated climate zones was tested by comparison with already published data, which used only three zones defined using climate information from the literature. The newly defined climate zones result in a more precise assignment of species distributions to individual habitats. This study provides the basis for a more detailed continent-wide ENM using a comprehensive dataset of lichen specimens located within 21 different climate regions.
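Bio 1 and Bio 12 are simple aggregates of monthly fields, and the combined zoning is a pair of binnings. A minimal sketch, with hypothetical bin edges standing in for the AMPS-derived ones:

    import numpy as np

    def bio1_bio12(monthly_tmean_c, monthly_precip_mm):
        """Bio 1 = annual mean temperature; Bio 12 = annual precipitation sum."""
        return float(np.mean(monthly_tmean_c)), float(np.sum(monthly_precip_mm))

    def climate_zone(bio1, bio12, t_edges, p_edges):
        """Combine a temperature bin (1-12) with a precipitation bin (A-E)."""
        t_zone = int(np.digitize(bio1, t_edges)) + 1        # 1 .. len(t_edges)+1
        p_zone = "ABCDE"[int(np.digitize(bio12, p_edges))]  # A .. E
        return f"{t_zone}{p_zone}"

    # Hypothetical edges; the paper derives its own from AMPS 2009-2015 data.
    t_edges = np.linspace(-40, 4, 11)   # 11 edges -> 12 temperature zones
    p_edges = [100, 250, 500, 1000]     # 4 edges -> 5 precipitation zones
    print(climate_zone(*bio1_bio12([-12.0] * 12, [30.0] * 12), t_edges, p_edges))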
PyRosetta: a script-based interface for implementing molecular modeling algorithms using Rosetta
Chaudhury, Sidhartha; Lyskov, Sergey; Gray, Jeffrey J.
2010-01-01
Summary: PyRosetta is a stand-alone Python-based implementation of the Rosetta molecular modeling package that allows users to write custom structure prediction and design algorithms using the major Rosetta sampling and scoring functions. PyRosetta contains Python bindings to libraries that define Rosetta functions including those for accessing and manipulating protein structure, calculating energies and running Monte Carlo-based simulations. PyRosetta can be used in two ways: (i) interactively, using iPython and (ii) script-based, using Python scripting. Interactive mode contains a number of help features and is ideal for beginners while script-mode is best suited for algorithm development. PyRosetta has similar computational performance to Rosetta, can be easily scaled up for cluster applications and has been implemented for algorithms demonstrating protein docking, protein folding, loop modeling and design. Availability: PyRosetta is a stand-alone package available at http://www.pyrosetta.org under the Rosetta license which is free for academic and non-profit users. A tutorial, user's manual and sample scripts demonstrating usage are also available on the web site. Contact: pyrosetta@graylab.jhu.edu PMID:20061306
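A minimal sketch of the interactive use described; the call names follow recent PyRosetta releases and are an assumption here, since they may differ from the version the summary refers to:

    import pyrosetta

    pyrosetta.init()                                     # start the Rosetta runtime
    pose = pyrosetta.pose_from_sequence("ACDEFGHIKL")    # build a pose from sequence
    scorefxn = pyrosetta.get_fa_scorefxn()               # default all-atom score function
    print(scorefxn(pose))                                # total Rosetta energy of the pose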
A web application for automatic prediction of gene translation elongation efficiency.
Sokolov, Vladimir S; Zuraev, Bulat S; Lashin, Sergei A; Matushkin, Yury G
2015-03-01
Expression efficiency is one of the major characteristics describing genes in various modern investigations. Expression efficiency of genes is regulated at various stages: transcription, translation, posttranslational protein modification and others. In this study, a special EloE (Elongation Efficiency) web application is described. The EloE sorts the organism's genes in descending order of their theoretical rate of the elongation stage of translation, based on the analysis of their nucleotide sequences. Obtained theoretical data have a significant correlation with available experimental data of gene expression in various organisms. In addition, the program identifies preferential codons in the organism's genes and defines the distribution of potential secondary structure energy in the 5' and 3' regions of mRNA. The EloE can be useful in preliminary estimation of translation elongation efficiency for genes for which experimental data are not available yet. Some results can be used, for instance, in other programs modeling artificial genetic structures in genetic engineering experiments. The EloE web application is available at http://www-bionet.sscc.ru:7780/EloE.
[Business intelligence in radiology. Challenges and opportunities].
Escher, A; Boll, D
2015-10-01
Due to economic pressures and the need for greater transparency, ubiquitous availability of administrative information is needed. Radiology managers should therefore consider implementing business intelligence (BI) solutions. BI is defined as a systematic approach to support decision-making in business administration. It is an important part of the overall strategy of an organization. Implementation and operation are initially associated with costs, and important prerequisites must be fulfilled for a successful launch. First, a suitable product must be selected, followed by the technical and organizational implementation. After consideration of the type of data to be collected, a system of key performance indicators must be established. BI replaces classic retrospective business reporting with multidimensional and multifactorial analyses, real-time monitoring, and predictive analyses. The benefits of BI include the rapid availability of important information and the depth of possible data analysis. The simple and intuitive use of modern BI applications by the users themselves (!), combined with the continuous availability of information, is the key to success. Professional BI will be an important part of management in radiology in the future.
Passive beam forming and spatial diversity in meteor scatter communication systems
NASA Astrophysics Data System (ADS)
Akram, Ammad; Cannon, Paul S.
1996-03-01
The method of passive beam formation using a four-element Butler matrix to improve the signal availability of meteor scatter communication systems is investigated. Signal availability, defined as the integrated time that the signal-to-noise ratio (snr) exceeds some snr threshold, serves as an important indicator of system performance. Butler matrix signal availability is compared with the performance of a single four-element Yagi reference system using ~6.5 hours of data from a 720 km north-south temperate latitude link. The signal availability improvement factor of the Butler matrix is found to range between 1.6-1.8 over the snr threshold range of 20-30 dB in a 300-Hz bandwidth. Experimental values of the Butler matrix signal availability improvement factor are compared with analytical predictions. The experimental values show an expected snr threshold dependency with a dramatic increase at high snr. A theoretical analysis is developed to describe this increase. The signal availability can be further improved by ~10-20% in a system employing two four-element Butler matrices with squinted beams so as to illuminate the sky with eight high-gain beams. Space diversity is found to increase the signal availability of a single antenna system by ~10-15%, but the technique has very little advantage in a system already employing passive beam formation.
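Signal availability as defined here is straightforward to compute from a sampled SNR record. A minimal sketch with synthetic data (the sampling interval and values are hypothetical):

    import numpy as np

    def signal_availability(snr_db, threshold_db, dt_s):
        """Integrated time (s) during which the SNR exceeds the threshold."""
        return float(np.sum(np.asarray(snr_db) > threshold_db) * dt_s)

    # Hypothetical 1-s samples of link SNR in dB (meteor bursts are short spikes).
    snr = np.concatenate([np.full(10, 5.0), np.full(3, 28.0), np.full(10, 5.0)])
    print(signal_availability(snr, 20.0, dt_s=1.0))  # 3.0 s above a 20 dB threshold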
ERIC Educational Resources Information Center
Glover, Rebecca J.; Natesan, Prathiba; Wang, Jie; Rohr, Danielle; McAfee-Etheridge, Lauri; Booker, Dana D.; Bishop, James; Lee, David; Kildare, Cory; Wu, Minwei
2014-01-01
Explorations of relationships between Haidt's Moral Foundations Questionnaire (MFQ) and indices of moral decision-making assessed by the Defining Issues Test have been limited to correlational analyses. This study used Harm, Fairness, Ingroup, Authority and Purity to predict overall moral judgment and individual Defining Issues Test-2 (DIT-2)…
How accurate is our clinical prediction of "minimal prostate cancer"?
Leibovici, Dan; Shikanov, Sergey; Gofrit, Ofer N; Zagaja, Gregory P; Shilo, Yaniv; Shalhav, Arieh L
2013-07-01
Recommendations for active surveillance versus immediate treatment for low risk prostate cancer are based on biopsy and clinical data, assuming that a low volume of well-differentiated carcinoma will be associated with a low progression risk. However, the accuracy of clinical prediction of minimal prostate cancer (MPC) is unclear. To define preoperative predictors for MPC in prostatectomy specimens and to examine the accuracy of such prediction. Data collected on 1526 consecutive radical prostatectomy patients operated in a single center between 2003 and 2008 included: age, body mass index, preoperative prostate-specific antigen level, biopsy Gleason score, clinical stage, percentage of positive biopsy cores, and maximal core length (MCL) involvement. MPC was defined as <5% of prostate volume involvement with organ-confined Gleason score ≤6. Univariate and multivariate logistic regression analyses were used to define independent predictors of minimal disease. Classification and Regression Tree (CART) analysis was used to define cutoff values for the predictors and measure the accuracy of prediction. MPC was found in 241 patients (15.8%). Clinical stage, biopsy Gleason score, percent of positive biopsy cores, and maximal involved core length were associated with minimal disease (OR 0.42, 0.1, 0.92, and 0.9, respectively). Independent predictors of MPC included: biopsy Gleason score, percent of positive cores and MCL (OR 0.21, 0.95 and 0.95, respectively). CART showed that when the MCL exceeded 11.5%, the likelihood of MPC was 3.8%. Conversely, when applying the most favorable preoperative conditions (Gleason ≤6, <20% positive cores, MCL ≤11.5%) the chance of minimal disease was 41%. Biopsy Gleason score, the percent of positive cores and MCL are independently associated with MPC. While preoperative prediction of significant prostate cancer was accurate, clinical prediction of MPC was incorrect 59% of the time. Caution is necessary when implementing clinical data as selection criteria for active surveillance.
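The abstract's most favorable preoperative profile translates directly into a screening rule, sketched below; note that even when the rule fires, the reported probability of truly minimal disease was only 41%:

    def favorable_for_mpc(gleason, pct_positive_cores, max_core_length_pct):
        """True when all of the study's most favorable preoperative criteria hold:
        biopsy Gleason <= 6, < 20% positive cores, maximal core length <= 11.5%."""
        return (gleason <= 6 and pct_positive_cores < 20
                and max_core_length_pct <= 11.5)

    print(favorable_for_mpc(6, 15.0, 10.0))  # True -> ~41% chance of MPC per the study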
Clique Relaxations in Biological and Social Network Analysis: Foundations and Algorithms
2015-10-26
A study of clique relaxation models arising in biological and social networks. This project examines the elementary clique-defining properties implicitly exploited in the available clique relaxation models and proposes a taxonomic framework for them.
Optimization of protein-protein docking for predicting Fc-protein interactions.
Agostino, Mark; Mancera, Ricardo L; Ramsland, Paul A; Fernández-Recio, Juan
2016-11-01
The antibody crystallizable fragment (Fc) is recognized by effector proteins as part of the immune system. Pathogens produce proteins that bind Fc in order to subvert or evade the immune response. The structural characterization of the determinants of Fc-protein association is essential to improve our understanding of the immune system at the molecular level and to develop new therapeutic agents. Furthermore, Fc-binding peptides and proteins are frequently used to purify therapeutic antibodies. Although several structures of Fc-protein complexes are available, numerous others have not yet been determined. Protein-protein docking could be used to investigate Fc-protein complexes; however, improved approaches are necessary to efficiently model such cases. In this study, a docking-based structural bioinformatics approach is developed for predicting the structures of Fc-protein complexes. Based on the available set of X-ray structures of Fc-protein complexes, three regions of the Fc, loosely corresponding to three turns within the structure, were defined as containing the essential features for protein recognition and used as restraints to filter the initial docking search. Rescoring the filtered poses with an optimal scoring strategy provided a success rate of approximately 80% of the test cases examined within the top ranked 20 poses, compared to approximately 20% by the initial unrestrained docking. The developed docking protocol provides a significant improvement over the initial unrestrained docking and will be valuable for predicting the structures of currently undetermined Fc-protein complexes, as well as in the design of peptides and proteins that target Fc. Copyright © 2016 John Wiley & Sons, Ltd.
Semiempirical prediction of protein folds
NASA Astrophysics Data System (ADS)
Fernández, Ariel; Colubri, Andrés; Appignanesi, Gustavo
2001-08-01
We introduce a semiempirical approach to predict ab initio expeditious pathways and native backbone geometries of proteins that fold under in vitro renaturation conditions. The algorithm is engineered to incorporate a discrete codification of local steric hindrances that constrain the movements of the peptide backbone throughout the folding process. Thus, the torsional state of the chain is assumed to be conditioned by the fact that hopping from one basin of attraction to another in the Ramachandran map (local potential energy surface) of each residue is energetically more costly than the search for a specific (Φ, Ψ) torsional state within a single basin. A combinatorial procedure is introduced to evaluate coarsely defined torsional states of the chain defined "modulo basins" and translate them into meaningful patterns of long range interactions. Thus, an algorithm for structure prediction is designed based on the fact that local contributions to the potential energy may be subsumed into time-evolving conformational constraints defining sets of restricted backbone geometries whereupon the patterns of nonbonded interactions are constructed. The predictive power of the algorithm is assessed by (a) computing ab initio folding pathways for mammalian ubiquitin that ultimately yield a stable structural pattern reproducing all of its native features, (b) determining the nucleating event that triggers the hydrophobic collapse of the chain, and (c) comparing coarse predictions of the stable folds of moderately large proteins (N~100) with structural information extracted from the protein data bank.
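The "modulo basins" codification can be illustrated with a coarse basin classifier over the (Φ, Ψ) map. A minimal sketch, with illustrative basin boundaries that are an assumption here, not the article's codification:

    def ramachandran_basin(phi, psi):
        """Coarse basin assignment on the (phi, psi) map, in degrees.
        Boundaries are illustrative, not the article's codification."""
        if phi < 0:
            if -120 <= psi <= 45:
                return "alpha-R"   # right-handed helical basin
            return "beta"          # extended / polyproline region
        return "alpha-L"           # left-handed helical basin

    print(ramachandran_basin(-60, -45))   # alpha-R
    print(ramachandran_basin(-120, 130))  # beta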
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hay, J.; Schwender, J.
Computational simulation of large-scale biochemical networks can be used to analyze and predict the metabolic behavior of an organism, such as a developing seed. Based on the biochemical literature, pathway databases and decision rules defining reaction directionality, we reconstructed bna572, a stoichiometric metabolic network model representing Brassica napus seed storage metabolism. In the highly compartmentalized network about 25% of the 572 reactions are transport reactions interconnecting nine subcellular compartments and the environment. According to known physiological capabilities of developing B. napus embryos, four nutritional conditions were defined to simulate heterotrophy or photoheterotrophy, each in combination with the availability of inorganic nitrogen (ammonia, nitrate) or amino acids as nitrogen sources. Based on mathematical linear optimization the optimal solution space was comprehensively explored by flux variability analysis, thereby identifying for each reaction the range of flux values allowable under optimality. The range and variability of flux values was then categorized into flux variability types. Across the four nutritional conditions, approximately 13% of the reactions have variable flux values and 10-11% are substitutable (can be inactive), both indicating metabolic redundancy given, for example, by isoenzymes, subcellular compartmentalization or the presence of alternative pathways. About one-third of the reactions are never used and are associated with pathways that are suboptimal for storage synthesis. Fifty-seven reactions change flux variability type among the different nutritional conditions, indicating their function in metabolic adjustments. This predictive modeling framework allows analysis and quantitative exploration of the storage metabolism of a developing B. napus oilseed.
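The flux variability analysis described can be sketched as a generic linear-programming loop: minimize and then maximize each reaction's flux subject to steady state. The toy network and bounds below are placeholders, and the full method additionally fixes the storage-synthesis objective at its optimal value before scanning:

    import numpy as np
    from scipy.optimize import linprog

    # Toy stoichiometric matrix (2 metabolites x 4 reactions), not bna572 itself.
    S = np.array([[1.0, -1.0, -1.0,  0.0],
                  [0.0,  1.0,  0.0, -1.0]])
    bounds = [(0.0, 10.0)] * 4  # flux bounds encoding reaction directionality

    def flux_range(j):
        """Min and max flux of reaction j over all steady states S v = 0."""
        c = np.zeros(4); c[j] = 1.0
        lo = linprog(c,  A_eq=S, b_eq=np.zeros(2), bounds=bounds).fun
        hi = -linprog(-c, A_eq=S, b_eq=np.zeros(2), bounds=bounds).fun
        return lo, hi

    for j in range(4):
        print(j, flux_range(j))  # classifies fixed vs. variable reactions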
A new score for screening of malnutrition in patients with inoperable gastric adenocarcinoma.
Esfahani, Ali; Somi, Mohammad Hossein; Asghari Jafarabadi, Mohammad; Ostadrahimi, Alireza; Ghayour Nahand, Mousa; Fathifar, Zahra; Doostzadeh, Akram; Ghoreishi, Zohreh
2017-06-01
Malnutrition is common in patients with gastric cancer. Early identification of malnourished patients results in improved quality of life. We aimed to assess the nutritional status of patients with inoperable gastric adenocarcinoma (IGA) and to find a precise malnutrition screening score for these patients before the onset of chemotherapy. Nutritional status was assessed using the patient-generated subjective global assessment (PG-SGA), visceral proteins, and high-sensitivity C-reactive protein. Tumor markers carcinoembryonic antigen (CEA), carbohydrate antigen 125 (CA-125) and CA 19-9 and their association with nutritional status were assessed. Then a new score for malnutrition screening was defined. Seventy-one patients with IGA completed the study. Malnourished and well-nourished patients (based on PG-SGA) were statistically different regarding albumin, prealbumin and CA-125. The best cut-off value of prealbumin for prediction of malnutrition was determined at 0.20 mg/dl and, using known cut-off values for albumin (3.5 g/dl) and CA-125 (35 U/ml), a new score was defined for malnutrition screening, named MS-score. According to MS-score, 92% of the patients had malnutrition and it could predict malnutrition with 96.8% sensitivity, 50% specificity and an accuracy of 91.4%. MS-score has been suggested as an available and easy-to-use tool for malnutrition screening in patients with IGA. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
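A sketch of an MS-score-style screen using the reported cut-offs; how the three markers are weighted and combined is an assumption here (one point per abnormal marker), not the published scoring rule:

    def ms_score(albumin_g_dl, prealbumin_mg_dl, ca125_u_ml):
        """One point per abnormal marker, using the abstract's cut-offs:
        albumin < 3.5 g/dl, prealbumin < 0.20 mg/dl, CA-125 > 35 U/ml."""
        score = (albumin_g_dl < 3.5) + (prealbumin_mg_dl < 0.20) + (ca125_u_ml > 35)
        return score, score >= 1  # (raw score, flagged for malnutrition risk)

    print(ms_score(3.2, 0.15, 60))  # (3, True)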
Bozbay, Mehmet; Uyarel, Huseyin; Avsar, Sahin; Oz, Ahmet; Keskin, Muhammed; Tanik, Veysel Ozan; Bakhshaliyev, Nijat; Ugur, Murat; Pehlivanoglu, Seckin; Eren, Mehmet
2015-12-01
Creatine kinase isoenzyme MB (CK-MB) is a biomarker for detecting myocardial injury. The aim of this study was to evaluate the association between admission CK-MB levels and in-hospital and long-term clinical outcomes in pulmonary embolism (PE) patients treated with thrombolytic tissue-plasminogen activator. A total of 148 acute PE patients treated with tissue-plasminogen activator were enrolled in the study. The study population was divided into two groups based on tertiles of admission CK-MB levels. The high CK-MB group (n=35) was defined as having a CK-MB level in the third tertile (>31.5 U/L), and the low group (n=113) as having a level in the lower two tertiles (≤31.5 U/L). The high CK-MB group had a higher incidence of in-hospital mortality (37.1% vs 1.7%, P<.001). Admission systolic blood pressure and tricuspid annular plane systolic excursion were lower in the high CK-MB group. In the receiver-operating characteristic curve analysis, a CK-MB value of more than 31.5 U/L yielded a sensitivity of 86.7% and specificity of 83.5% for predicting in-hospital mortality. During long-term follow-up, recurrent PE, major and minor bleeding, and mortality rates were similar in both groups. Creatine kinase isoenzyme MB is a simple, widely available, and useful biomarker for predicting adverse in-hospital clinical outcomes in PE. Copyright © 2015 Elsevier Inc. All rights reserved.
Shakespear, J S; Blom, D; Huprich, J E; Peters, J H
2004-03-01
Ineffective esophageal motility disorder (IEM) is a new, manometrically defined esophageal motility disorder, associated with severe gastroesophageal reflux disease (GERD), GERD-associated respiratory symptoms, delayed acid clearance, and mucosal injury. Videoesophagram is an important, inexpensive, and widely available tool in the diagnostic evaluation of patients with esophageal pathologies. The efficacy of videoesophagography has not been rigorously examined in patients with IEM. The aim of this study was to determine the diagnostic value of videoesophagography in patients with IEM. The radiographic and manometric findings of 202 consecutive patients presenting with foregut symptoms were evaluated. IEM was defined by strict manometric criteria. All other named motility disorders, such as achalasia, were excluded. Videoesophagography was performed according to a standard protocol. Of patients in this cohort, 16% (33/202) had IEM by manometric criteria. Of IEM patients, 55% (18/33) had an abnormal videoesophagram, while in 45% (15/33) this test was read as normal. Only 11% (15/137) of patients with a normal videoesophagram were found to have IEM. Sensitivity of videoesophagram was 54.6%, specificity 72.2%, positive predictive value only 27.7%, and negative predictive value 89.1% in the diagnosis of IEM. These data show that videoesophagram is relatively insensitive in detecting patients with IEM and should not be considered a valid diagnostic test for this disorder. We conclude that esophageal manometry is an indispensable diagnostic modality in the workup of a patient suspected of IEM.
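The reported operating characteristics can be reproduced from the 2x2 counts implied by the abstract (18 true positives, 15 false negatives, 122 true negatives and 47 false positives among the 202 patients):

    def diagnostic_metrics(tp, fn, fp, tn):
        """Standard 2x2 diagnostic-accuracy measures."""
        return {"sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "ppv": tp / (tp + fp),
                "npv": tn / (tn + fn)}

    # 33 IEM patients (18 abnormal, 15 normal studies); 137 normal studies in
    # total, so 137 - 15 = 122 true negatives and 169 - 122 = 47 false positives.
    print(diagnostic_metrics(tp=18, fn=15, fp=47, tn=122))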
Quantifying Information Gain from Dynamic Downscaling Experiments
NASA Astrophysics Data System (ADS)
Tian, Y.; Peters-Lidard, C. D.
2015-12-01
Dynamic climate downscaling experiments are designed to produce information at higher spatial and temporal resolutions. Such additional information is generated from the low-resolution initial and boundary conditions via the predictive power of the physical laws. However, errors and uncertainties in the initial and boundary conditions can be propagated and even amplified to the downscaled simulations. Additionally, the limit of predictability in nonlinear dynamical systems will also dampen the information gain, even if the initial and boundary conditions were error-free. Thus it is critical to quantitatively define and measure the amount of information increase from dynamic downscaling experiments, to better understand and appreciate their potentials and limitations. We present a scheme to objectively measure the information gain from such experiments. The scheme is based on information theory, and we argue that if a downscaling experiment is to exhibit value, it has to produce more information than what can be simply inferred from information sources already available. These information sources include the initial and boundary conditions, the coarse resolution model in which the higher-resolution models are embedded, and the same set of physical laws. These existing information sources define an "information threshold" as a function of the spatial and temporal resolution, and this threshold serves as a benchmark to quantify the information gain from the downscaling experiments, or any other approaches. For a downscaling experiment to show any value, the information has to be above this threshold. A recent NASA-supported downscaling experiment is used as an example to illustrate the application of this scheme.
Prediction of Dementia in Primary Care Patients
Jessen, Frank; Wiese, Birgitt; Bickel, Horst; Eiffländer-Gorfer, Sandra; Fuchs, Angela; Kaduszkiewicz, Hanna; Köhler, Mirjam; Luck, Tobias; Mösch, Edelgard; Pentzek, Michael; Riedel-Heller, Steffi G.; Wagner, Michael; Weyerer, Siegfried; Maier, Wolfgang; van den Bussche, Hendrik
2011-01-01
Background Current approaches for AD prediction are based on biomarkers, which are however of restricted availability in primary care. AD prediction tools for primary care are therefore needed. We present a prediction score based on information that can be obtained in the primary care setting. Methodology/Principal Findings We performed a longitudinal cohort study in 3,055 non-demented individuals above 75 years recruited via primary care chart registries (Study on Aging, Cognition and Dementia, AgeCoDe). After the baseline investigation we performed three follow-up investigations at 18-month intervals with incident dementia as the primary outcome. The best set of predictors was extracted from the baseline variables in one randomly selected half of the sample. This set included age, subjective memory impairment, performance on delayed verbal recall and verbal fluency, on the Mini-Mental-State-Examination, and on an instrumental activities of daily living scale. These variables were aggregated to a prediction score, which achieved a prediction accuracy of 0.84 for AD. The score was applied to the second half of the sample (test cohort). Here, the prediction accuracy was 0.79. With a cut-off of at least 80% sensitivity in the first cohort, 79.6% sensitivity, 66.4% specificity, 14.7% positive predictive value (PPV) and 97.8% negative predictive value (NPV) for AD were achieved in the test cohort. At a cut-off for a high risk population (5% of individuals with the highest risk score in the first cohort) the PPV for AD was 39.1% (52% for any dementia) in the test cohort. Conclusions The prediction score has useful prediction accuracy. It can define individuals (1) sensitively for low-cost, low-risk interventions, or (2) more specifically and with increased PPV for measures of prevention with greater costs or risks. As it is independent of technical aids, it may be used within large-scale prevention programs. PMID:21364746
Car Ownership and the Association between Fruit and Vegetable Availability and Diet
Bodor, J. Nicholas; Hutchinson, Paul L.; Rose, Donald
2013-01-01
Objective: Nearly all research on the food environment and diet has not accounted for car ownership – a potential key modifying factor. This study examined the modifying effect of car ownership on the relationship between neighborhood fruit and vegetable availability and intake. Methods: Data on respondents’ (n=760) fruit and vegetable intake, car ownership, and demographics came from the 2008 New Orleans Behavioral Risk Factor Surveillance System. Shelf space data on fresh, frozen, and canned fruits and vegetables were collected in 2008 from a random sample of New Orleans stores (n=114). Availability measures were constructed by summing the amount of fruit and vegetable shelf space in all stores within defined distances from respondent households. Regression analyses controlled for demographics and were run separately for respondents with and without a car. Results: Fruit and vegetable availability was positively associated with intake among non-car owners. An additional 100 meters of shelf space within 2 kilometers of a residence was predictive of a half-serving/day increase in fruit and vegetable intake. Availability was not associated with intake among car owners. Conclusions: Future research and interventions to increase neighborhood healthy food options, should consider car ownership rates in their target areas as an important modifying factor. PMID:24145203
TIMPs of parasitic helminths - a large-scale analysis of high-throughput sequence datasets.
Cantacessi, Cinzia; Hofmann, Andreas; Pickering, Darren; Navarro, Severine; Mitreva, Makedonka; Loukas, Alex
2013-05-30
Tissue inhibitors of metalloproteases (TIMPs) are a multifunctional family of proteins that orchestrate extracellular matrix turnover, tissue remodelling and other cellular processes. In parasitic helminths, such as hookworms, TIMPs have been proposed to play key roles in the host-parasite interplay, including invasion of and establishment in the vertebrate animal hosts. Currently, knowledge of helminth TIMPs is limited to a small number of studies on canine hookworms, whereas no information is available on the occurrence of TIMPs in other parasitic helminths causing neglected diseases. In the present study, we conducted a large-scale investigation of TIMP proteins of a range of neglected human parasites including the hookworm Necator americanus, the roundworm Ascaris suum, the liver flukes Clonorchis sinensis and Opisthorchis viverrini, as well as the schistosome blood flukes. This entailed mining available transcriptomic and/or genomic sequence datasets for the presence of homologues of known TIMPs, predicting secondary structures of defined protein sequences, systematic phylogenetic analyses and assessment of differential expression of genes encoding putative TIMPs in the developmental stages of A. suum, N. americanus and Schistosoma haematobium which infect the mammalian hosts. A total of 15 protein sequences with high homology to known eukaryotic TIMPs were predicted from the complement of sequence data available for parasitic helminths and subjected to in-depth bioinformatic analyses. Supported by the availability of gene manipulation technologies such as RNA interference and/or transgenesis, this work provides a basis for future functional explorations of helminth TIMPs and, in particular, of their role/s in fundamental biological pathways linked to long-term establishment in the vertebrate hosts, with a view towards the development of novel approaches for the control of neglected helminthiases.
Jongeneelen, Frans J; Berge, Wil F Ten
2011-10-01
Physiologically based toxicokinetic (PBTK) models are computational tools, which simulate the absorption, distribution, metabolism, and excretion of chemicals. The purpose of this study was to develop a PBTK model with a high level of transparency. The model should be able to predict blood and urine concentrations of environmental chemicals and metabolites, given a certain environmental or occupational exposure scenario. The model refers to a reference human of 70 kg. The partition coefficients of the parent compound and its metabolites (blood:air and tissue:blood partition coefficients of 11 organs) are estimated by means of quantitative structure-property relationships, in which five easily available physicochemical properties of the compound are the independent parameters. The model predicts the fate of the compound from easily available chemical properties and can therefore serve as a generic model applicable to multiple compounds. Three routes of uptake are considered (inhalation, dermal, and/or oral) as well as two built-in exercise levels (at rest and at light work). Dermal uptake is estimated by the use of a dermal diffusion-based module that considers dermal deposition rate and duration of deposition. Moreover, evaporation during skin contact is fully accounted for and related to the volatility of the substance. Saturable metabolism according to Michaelis-Menten kinetics can be modelled in any of 11 organs/tissues or in the liver only. Renal tubular resorption is based on a built-in algorithm, dependent on the (log) octanol:water partition coefficient. Enterohepatic circulation is optional at a user-defined rate. The generic PBTK model is available as a spreadsheet application in MS Excel. The differential equations of the model are programmed in Visual Basic. Output is presented as numerical listings over time, in tabular form and in graphs. The MS Excel application of the PBTK model is available as freeware. The accuracy of the model prediction is illustrated by simulating experimental observations. Published experimental inhalation and dermal exposure studies on a series of different chemicals (pyrene, N-methyl-pyrrolidone, methyl-tert-butylether, heptane, 2-butoxyethanol, and ethanol) were selected to compare the observed data with the model-simulated data. The examples show that the model-predicted concentrations in blood and/or urine after inhalation and/or transdermal uptake are accurate to within an order of magnitude. It is advocated that this PBTK model, called IndusChemFate, is suitable for 'first tier assessments' and for early explorations of the fate of chemicals and/or metabolites in the human body. The availability of a simple model with a minimal burden of input information on the parent compound and its metabolites may encourage wider application of PBTK modelling in the field of biomonitoring and exposure science.
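IndusChemFate itself is distributed as an MS Excel/Visual Basic workbook, but the core mechanism described above, a compartmental mass balance with saturable Michaelis-Menten elimination, can be sketched in a few lines. Below is a minimal, hypothetical one-compartment illustration in Python (all parameter values are invented), not the IndusChemFate model itself:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative one-compartment toxicokinetic sketch (not IndusChemFate):
# constant absorbed dose rate during exposure, saturable elimination.
V_D = 40.0        # distribution volume (L), hypothetical
UPTAKE = 5.0      # absorbed dose rate (mg/h) during exposure, hypothetical
T_EXPOSURE = 8.0  # exposure duration (h)
VMAX, KM = 2.0, 0.5  # Michaelis-Menten parameters (mg/h, mg/L), hypothetical

def dcdt(t, c):
    uptake = UPTAKE / V_D if t < T_EXPOSURE else 0.0
    elimination = (VMAX * c[0]) / (V_D * (KM + c[0]))  # Michaelis-Menten
    return [uptake - elimination]

sol = solve_ivp(dcdt, (0.0, 24.0), [0.0], dense_output=True)
for t in (4, 8, 16, 24):
    print(f"t={t:2d} h  blood conc ~ {sol.sol(t)[0]:.3f} mg/L")
```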
Enabling the use of climate model data in the Dutch climate effect community
NASA Astrophysics Data System (ADS)
Som de Cerff, Wim; Plieger, Maarten
2010-05-01
Within the climate effect community the usage of climate model data is emerging. Where mostly climate time series and weather generators were used, there is a shift to incorporate climate model data into climate effect models. The use of climate model data within the climate effect models is difficult, due to missing metadata, resolution and projection issues, data formats and availability of the parameters of interest. Often the climate effect modelers are not aware of available climate model data or are not aware of how they can use it. Together with seven other partners (CERFACS, CNR-IPSL, SMHI, INHGA, CMCC, WUR, MF-CNRM), KNMI is involved in the FP7 IS ENES (http://www.enes.org) project work package 10/JRA5 'Bridging Climate Research Data and the Needs of the Impact Community'. The aims of this work package are to enhance the use of Climate Research Data and to enhance the interaction with climate effect/impact communities. Phase one is to define use cases together with the Dutch climate effect community, which describe the intended use of climate model data in climate effect models. We defined four use cases: 1) FEWS hydrological Framework (Deltares) 2) METAPHOR, a plants and species dispersion model (Wageningen University) 3) Natuurplanner, an Ecological model suite (Wageningen University) 4) Land use models (Free University/JRC). The other partners in JRA5 have also defined use cases, which are representative for the climate effect and impact communities in their country. The goal is to find commonalities among all defined use cases. The common functionality will be implemented as e-tools and incorporated in the IS-ENES data portal. Common issues relate to, e.g., the need for high resolution: downscaling from GCM to local scale (also involving interpolation); parameter selection; finding extremes; averaging methods. At the conference we will describe the FEWS case in more detail: Delft FEWS is an open shell system (in development since 1995) for performing hydrological predictions and the handling of time series data. The most important capabilities of FEWS are the importing of meteorological and hydrological data and the organizing of the workflows of the different models which can be used within FEWS, like the Netherlands Hydrological Instrumentarium (NHI). Besides predictions, the system is currently being used for hydrological climate effect studies. Currently regionally downscaled data are used, but using climate model data will be the next step. This coupling of climate model data to FEWS will open a wider range of climate impact and effect research, but it is a difficult task to accomplish. Issues to be dealt with are: regridding, downscaling, format conversion, extraction of required data and addition of descriptive metadata, including quality and uncertainty parameters. Finding an appropriate solution involves several iterations: first, the use case was defined; then a single data file containing some data of interest was delivered via FTP; next, these data were offered through OGC services. Currently we are working on providing larger datasets and improving on the parameters and metadata. We will present the results (e-tools/data) and experiences gained in implementing the described use cases. Note that we are currently using experimental data, as the official climate model runs are not available yet.
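Several of the common issues listed above (regridding, downscaling, interpolation) are routine numerical operations. As a rough illustration of the regridding step, the sketch below interpolates a coarse climate-model field onto a finer target grid; the grids and values are invented placeholders, not IS-ENES data:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical coarse GCM field (e.g., precipitation) on a 2.5-degree grid.
lat_c = np.arange(50.0, 55.1, 2.5)
lon_c = np.arange(3.0, 8.1, 2.5)
field = np.random.default_rng(0).random((lat_c.size, lon_c.size))

interp = RegularGridInterpolator((lat_c, lon_c), field, method="linear")

# Regrid onto a finer 0.5-degree target grid for an effect model.
lat_f = np.arange(50.0, 55.01, 0.5)
lon_f = np.arange(3.0, 8.01, 0.5)
pts = np.array(np.meshgrid(lat_f, lon_f, indexing="ij")).reshape(2, -1).T
fine = interp(pts).reshape(lat_f.size, lon_f.size)
print(fine.shape)  # (11, 11)
```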
Improved high-dimensional prediction with Random Forests by the use of co-data.
Te Beest, Dennis E; Mes, Steven W; Wilting, Saskia M; Brakenhoff, Ruud H; van de Wiel, Mark A
2017-12-28
Prediction in high-dimensional settings is difficult due to the large number of variables relative to the sample size. We demonstrate how auxiliary 'co-data' can be used to improve the performance of a Random Forest in such a setting. Co-data are incorporated in the Random Forest by replacing the uniform sampling probabilities that are used to draw candidate variables with co-data moderated sampling probabilities. Co-data here are defined as any type of information that is available on the variables of the primary data but does not use its response labels. Inspired by empirical Bayes, these moderated sampling probabilities are learned from the data at hand. We demonstrate the co-data moderated Random Forest (CoRF) with two examples. In the first example we aim to predict the presence of a lymph node metastasis with gene expression data. We demonstrate how a set of external p-values, a gene signature, and the correlation between gene expression and DNA copy number can improve the predictive performance. In the second example we demonstrate how the prediction of cervical (pre-)cancer with methylation data can be improved by including the location of the probe relative to the known CpG islands, the number of CpG sites targeted by a probe, and a set of p-values from a related study. The proposed method is able to utilize auxiliary co-data to improve the performance of a Random Forest.
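A minimal sketch of the co-data idea follows: candidate variables are drawn with co-data moderated probabilities instead of uniformly. Note that the published CoRF moderates the candidate draw at each split inside the Random Forest; sampling one feature subset per tree, as below, is a simplified stand-in, and all data and weights are simulated:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n, p = 200, 500
X = rng.normal(size=(n, p))
y = (X[:, :5].sum(axis=1) + rng.normal(size=n) > 0).astype(int)

# Hypothetical co-data: external p-values per variable, turned into weights.
codata_p = np.concatenate([rng.uniform(0, 0.05, 5), rng.uniform(0, 1, p - 5)])
w = -np.log(codata_p + 1e-12)
probs = w / w.sum()  # co-data moderated sampling probabilities

def corf_predict(X_tr, y_tr, X_te, n_trees=200, mtry=30):
    votes = np.zeros(len(X_te))
    for _ in range(n_trees):
        feats = rng.choice(p, size=mtry, replace=False, p=probs)  # co-data draw
        boot = rng.integers(0, len(X_tr), len(X_tr))              # bootstrap rows
        tree = DecisionTreeClassifier().fit(X_tr[boot][:, feats], y_tr[boot])
        votes += tree.predict(X_te[:, feats])
    return votes / n_trees  # fraction of trees voting "1"

print(corf_predict(X[:150], y[:150], X[150:])[:10].round(2))
```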
Improved therapy-success prediction with GSS estimated from clinical HIV-1 sequences.
Pironti, Alejandro; Pfeifer, Nico; Kaiser, Rolf; Walter, Hauke; Lengauer, Thomas
2014-01-01
Rules-based HIV-1 drug-resistance interpretation (DRI) systems disregard many amino-acid positions of the drug's target protein. The aims of this study are (1) the development of a drug-resistance interpretation system that is based on HIV-1 sequences from clinical practice rather than hard-to-get phenotypes, and (2) the assessment of the benefit of taking all available amino-acid positions into account for DRI. A dataset containing 34,934 therapy-naïve and 30,520 drug-exposed HIV-1 pol sequences with treatment history was extracted from the EuResist database and the Los Alamos National Laboratory database. 2,550 therapy-change-episode baseline sequences (TCEB) were assigned to test set A. Test set B contains 1,084 TCEB from the HIVdb TCE repository. Sequences from patients absent in the test sets were used to train three linear support vector machines to produce scores that predict drug exposure pertaining to each of 20 antiretrovirals: the first one uses the full amino-acid sequences (DEfull), the second one only considers IAS drug-resistance positions (DEonlyIAS), and the third one disregards IAS drug-resistance positions (DEnoIAS). For performance comparison, test sets A and B were evaluated with DEfull, DEnoIAS, DEonlyIAS, geno2pheno[resistance], HIVdb, ANRS, HIV-GRADE, and REGA. Clinically-validated cut-offs were used to convert the continuous output of the first four methods into susceptible-intermediate-resistant (SIR) predictions. With each method, a genetic susceptibility score (GSS) was calculated for each therapy episode in each test set by converting the SIR predictions for its compounds to integers: S=2, I=1, and R=0. The GSS were used to predict therapy success as defined by the EuResist standard datum definition. Statistical significance was assessed using a Wilcoxon signed-rank test. A comparison of the therapy-success prediction performances among the different interpretation systems for test set A can be found in Table 1, while those for test set B are found in Figure 1. Therapy-success prediction of first-line therapies with DEnoIAS performed better than DEonlyIAS (p < 10^-16). Therapy-success prediction benefits from the consideration of all available mutations. The increase in performance was largest in first-line therapies with transmitted drug-resistance mutations.
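The GSS computation described above is a simple mapping and sum; a minimal sketch (the regimen and calls are hypothetical):

```python
# Minimal sketch of the genetic susceptibility score (GSS): each compound's
# susceptible/intermediate/resistant (SIR) call is mapped to an integer and
# summed over the regimen.
SIR_TO_INT = {"S": 2, "I": 1, "R": 0}

def gss(sir_calls):
    """sir_calls: one SIR prediction per compound in a therapy episode."""
    return sum(SIR_TO_INT[c] for c in sir_calls)

# Hypothetical three-drug regimen:
print(gss(["S", "S", "I"]))  # -> 5
```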
Yurkovich, James T.; Yang, Laurence; Palsson, Bernhard O.; ...
2017-03-06
Deep-coverage metabolomic profiling has revealed a well-defined development of metabolic decay in human red blood cells (RBCs) under cold storage conditions. A set of extracellular biomarkers has been recently identified that reliably defines the qualitative state of the metabolic network throughout this metabolic decay process. Here, we extend the utility of these biomarkers by using them to quantitatively predict the concentrations of other metabolites in the red blood cell. We are able to accurately predict the concentration profile of 84 of the 91 (92%) measured metabolites (p < 0.05) in RBC metabolism using only measurements of these five biomarkers. The median of prediction errors (symmetric mean absolute percent error) across all metabolites was 13%. Furthermore, the ability to predict numerous metabolite concentrations from a simple set of biomarkers offers the potential for the development of a powerful workflow that could be used to evaluate the metabolic state of a biological system using a minimal set of measurements.
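As a hedged illustration of the workflow, predicting many metabolite time courses from five biomarkers and scoring with the symmetric mean absolute percent error (SMAPE), the sketch below uses simulated data and ordinary least squares rather than the authors' actual model:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Simulated stand-in: 30 time points, 5 biomarkers, 91 metabolites.
rng = np.random.default_rng(2)
n_times = 30
biomarkers = rng.random((n_times, 5))
true_coef = rng.random((5, 91))
metabolites = biomarkers @ true_coef + 0.01 * rng.normal(size=(n_times, 91))

# Fit on the first 20 time points, predict the remaining 10.
model = LinearRegression().fit(biomarkers[:20], metabolites[:20])
pred = model.predict(biomarkers[20:])
obs = metabolites[20:]

# Symmetric mean absolute percent error (SMAPE).
smape = 100 * np.mean(2 * np.abs(pred - obs) / (np.abs(pred) + np.abs(obs)))
print(f"SMAPE ~ {smape:.1f}%")
```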
A new multi-scale geomorphological landscape GIS for the Netherlands
NASA Astrophysics Data System (ADS)
Weerts, Henk; Kosian, Menne; Baas, Henk; Smit, Bjorn
2013-04-01
At present, the Cultural Heritage Agency of the Netherlands is developing a nationwide landscape Geographical Information System (GIS). In this new conceptual approach, the Agency puts together several multi-scale landscape classifications in a GIS. The natural physical landscapes lie at the basis of this GIS, because these landscapes provide the natural boundary conditions for anthropogenic use. At the local scale a nationwide digital geomorphological GIS is available in the Netherlands. This map, originally surveyed at 1:50,000 from the late 1970s to the 1990s, is based on geomorphometrical (observable and measurable in the field), geomorphological, lithological and geochronological criteria. When used at a national scale, the legend of this comprehensive geomorphological map is very complex, which hampers use in e.g. planning practice or predictive archaeology. At the national scale several landscape classifications have been in use in the Netherlands since the early 1950s, typically in the order of 10-15 landscape units for the entire country. A widely used regional predictive archaeological classification has 13 archaeo-landscapes. All these classifications have been defined "top-down" and their actual content and boundaries have only been broadly defined. Thus, these classifications have little or no meaning at a local scale. We have tried to combine the local scale with the national scale. To do so, we first defined national physical geographical regions based on the new 2010 national geological map 1:500,000. We also made sure there was a link with the European LANMAP2 classification. We arrived at 20 landscape units at the national scale, based on (1) genesis, (2) large-scale geomorphology, (3) lithology of the shallow sub-surface and (4) age. These criteria were chosen because the genesis of the landscape largely determines its (scale of) morphology and lithology, which in turn determine hydrological conditions. Together, they define the natural boundary conditions for anthropogenic use. All units have been defined, mapped and described based on these criteria. This enables the link with the European LANMAP2 GIS. The unit "Till-plateau sand region" for instance runs deep into Germany and even Poland. At the local scale, the boundaries of the national units can be defined and precisely mapped by linking them to the 1:50,000 geomorphological map polygons. Each national unit consists of a typical assemblage of local geomorphological units. So, the newly developed natural physical landscape map layer can be used from the local to the European scale.
Rusyn, Ivan; Sedykh, Alexander; Guyton, Kathryn Z.; Tropsha, Alexander
2012-01-01
Quantitative structure-activity relationship (QSAR) models are widely used for in silico prediction of in vivo toxicity of drug candidates or environmental chemicals, adding value to candidate selection in drug development or in a search for less hazardous and more sustainable alternatives for chemicals in commerce. The development of traditional QSAR models is enabled by numerical descriptors representing the inherent chemical properties that can be easily defined for any number of molecules; however, traditional QSAR models often have limited predictive power due to the lack of data and complexity of in vivo endpoints. Although it has been indeed difficult to obtain experimentally derived toxicity data on a large number of chemicals in the past, the results of quantitative in vitro screening of thousands of environmental chemicals in hundreds of experimental systems are now available and continue to accumulate. In addition, publicly accessible toxicogenomics data collected on hundreds of chemicals provide another dimension of molecular information that is potentially useful for predictive toxicity modeling. These new characteristics of molecular bioactivity arising from short-term biological assays, i.e., in vitro screening and/or in vivo toxicogenomics data can now be exploited in combination with chemical structural information to generate hybrid QSAR–like quantitative models to predict human toxicity and carcinogenicity. Using several case studies, we illustrate the benefits of a hybrid modeling approach, namely improvements in the accuracy of models, enhanced interpretation of the most predictive features, and expanded applicability domain for wider chemical space coverage. PMID:22387746
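The hybrid modeling approach amounts to augmenting a chemical descriptor matrix with short-term bioassay features before fitting a classifier. A minimal sketch with simulated matrices (descriptors, assay readouts and endpoint are all invented):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Simulated stand-ins for real descriptor and assay matrices.
rng = np.random.default_rng(3)
n_chem = 300
chem_desc = rng.normal(size=(n_chem, 50))  # e.g., computed molecular descriptors
invitro = rng.normal(size=(n_chem, 20))    # e.g., in vitro screening responses
y = (invitro[:, 0] + chem_desc[:, 0] + rng.normal(size=n_chem) > 0).astype(int)

# Hybrid feature matrix: chemistry plus biology.
X_hybrid = np.hstack([chem_desc, invitro])
clf = RandomForestClassifier(n_estimators=300, random_state=0)
print(cross_val_score(clf, X_hybrid, y, cv=5, scoring="roc_auc").mean())
```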
Cai, Tommaso; Conti, Gloria; Nesi, Gabriella; Lorenzini, Matteo; Mondaini, Nicola; Bartoletti, Riccardo
2007-10-01
The objective of our study was to define a neural network for predicting recurrence and progression-free probability in patients affected by recurrent pTaG3 urothelial bladder cancer, for use in everyday clinical practice. Among all patients who had undergone transurethral resection for bladder tumors, 143 were finally selected and enrolled. Four follow-ups for recurrence, progression or survival were performed at 6, 9, 12 and 108 months. The data were analyzed by using the commercially available software program NeuralWorks Predict. These data were compared with univariate and multivariate analysis results. The use of Artificial Neural Networks (ANN) in recurrent pTaG3 patients showed a sensitivity of 81.67% and specificity of 95.87% in predicting recurrence-free status after transurethral resection of bladder tumor at 12 months follow-up. Statistical and ANN analyses allowed selection of the number of lesions (multiple, HR=3.31, p=0.008) and the previous recurrence rate (≥2/year, HR=3.14, p=0.003) as the most influential variables affecting the output decision in predicting the natural history of recurrent pTaG3 urothelial bladder cancer. ANN applications also included selection of the previous adjuvant therapy. We demonstrated the feasibility and reliability of ANN applications in everyday clinical practice, reporting a good recurrence predicting performance. The study identified a single subgroup of pTaG3 patients, with multiple lesions, a ≥2/year recurrence rate and no response to previous Bacille Calmette-Guérin adjuvant therapy, that seems to be at high risk of recurrence.
Thomas, Minta; De Brabanter, Kris; De Moor, Bart
2014-05-10
DNA microarrays are potentially powerful technology for improving diagnostic classification, treatment selection, and prognostic assessment. The use of this technology to predict cancer outcome has a history of almost a decade. Disease class predictors can be designed for known disease cases and provide diagnostic confirmation or clarify abnormal cases. The main input to these class predictors is high-dimensional data with many variables and few observations. Dimensionality reduction of this feature set significantly speeds up the prediction task. Feature selection and feature transformation methods are well-known preprocessing steps in the field of bioinformatics. Several prediction tools are available based on these techniques. Studies show that a well-tuned Kernel PCA (KPCA) is an efficient preprocessing step for dimensionality reduction, but the available bandwidth selection method for KPCA was computationally expensive. In this paper, we propose a new data-driven bandwidth selection criterion for KPCA, which is related to least squares cross-validation for kernel density estimation. We propose a new prediction model with a well-tuned KPCA and Least Squares Support Vector Machine (LS-SVM). We estimate the accuracy of the newly proposed model based on 9 case studies. Then, we compare its performance (in terms of test set Area Under the ROC Curve (AUC) and computational time) with other well-known techniques such as whole data set + LS-SVM, PCA + LS-SVM, t-test + LS-SVM, Prediction Analysis of Microarrays (PAM) and Least Absolute Shrinkage and Selection Operator (Lasso). Finally, we assess the performance of the proposed strategy with an existing KPCA parameter tuning algorithm by means of two additional case studies. We propose, evaluate, and compare several mathematical/statistical techniques, which apply feature transformation/selection for subsequent classification, and consider their application in medical diagnostics. Both feature selection and feature transformation perform well on classification tasks. Due to the dynamic selection property of feature selection, it is hard to define significant features for the classifier, which predicts classes of future samples. Moreover, the proposed strategy enjoys a distinctive advantage with its relatively lower time complexity.
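To make the bandwidth-selection idea concrete, the sketch below grid-searches a Gaussian kernel density bandwidth with the classical least-squares cross-validation criterion and passes the result to kernel PCA as an RBF width. This illustrates the criterion only; the paper's exact data-driven rule and the LS-SVM stage are not reproduced:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(4)
X = rng.normal(size=(60, 10))
n, d = X.shape
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances

def gauss_sum(h, scale):
    """Sum over all pairs of d-dim Gaussian kernels with bandwidth h*scale."""
    s = h * scale
    return np.exp(-sq / (2 * s * s)).sum() / ((2 * np.pi) ** (d / 2) * s ** d)

def lscv(h):
    # Integral of fhat^2 uses bandwidth sqrt(2)*h (Gaussian convolution identity).
    int_f2 = gauss_sum(h, np.sqrt(2)) / n ** 2
    # Leave-one-out term: off-diagonal kernel sum (diagonal removed).
    loo = (gauss_sum(h, 1.0) - n / ((2 * np.pi) ** (d / 2) * h ** d)) / (n * (n - 1))
    return int_f2 - 2 * loo

grid = np.linspace(0.3, 3.0, 28)
h_best = grid[np.argmin([lscv(h) for h in grid])]
kpca = KernelPCA(n_components=5, kernel="rbf", gamma=1.0 / (2 * h_best ** 2))
Z = kpca.fit_transform(X)
print(h_best, Z.shape)
```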
Schwenke, Michael; Strehlow, Jan; Demedts, Daniel; Haase, Sabrina; Barrios Romero, Diego; Rothlübbers, Sven; von Dresky, Caroline; Zidowitz, Stephan; Georgii, Joachim; Mihcin, Senay; Bezzi, Mario; Tanner, Christine; Sat, Giora; Levy, Yoav; Jenne, Jürgen; Günther, Matthias; Melzer, Andreas; Preusser, Tobias
2017-01-01
Focused ultrasound (FUS) is entering clinical routine as a treatment option. Currently, no clinically available FUS treatment system features automated respiratory motion compensation. The required quality standards make developing such a system challenging. A novel FUS treatment system with motion compensation is described, developed with the goal of clinical use. The system comprises a clinically available MR device and FUS transducer system. The controller is very generic and could use any suitable MR or FUS device. MR image sequences (echo planar imaging) are acquired for both motion observation and thermometry. Based on anatomical feature tracking, motion predictions are estimated to compensate for processing delays. FUS control parameters are computed repeatedly and sent to the hardware to steer the focus to the (estimated) target position. All involved calculations produce individually known errors, yet their impact on therapy outcome is unclear. This is addressed by defining an intuitive quality measure that compares the achieved temperature to the static scenario, resulting in an overall efficiency with respect to temperature rise. To allow for extensive testing of the system over wide ranges of parameters and algorithmic choices, we replace the actual MR and FUS devices by a virtual system. It emulates the hardware and, using numerical simulations of FUS during motion, predicts the local temperature rise in the tissue resulting from the controls it receives. With a clinically available monitoring image rate of 6.67 Hz and 20 FUS control updates per second, normal respiratory motion is estimated to be compensable with an efficiency of about 80%. This reduces to about 70% for motion scaled by 1.5. Extensive testing (6347 simulated sonications) over wide ranges of parameters shows that the main source of error is the temporal motion prediction. A history-based motion prediction method performs better than a simple linear extrapolator. The estimated efficiency of the new treatment system is already suitable for clinical applications. The simulation-based in-silico testing as a first-stage validation reduces the efforts of real-world testing. Due to the extensible modular design, the described approach might lead to faster translation from research to clinical practice.
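The comparison between a linear extrapolator and a history-based temporal predictor can be illustrated on a synthetic respiratory trace. The pattern-matching predictor below is a generic stand-in rather than the paper's method, and all signal parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(0, 60, 0.15)                 # ~6.67 Hz monitoring rate
pos = 5 * np.sin(2 * np.pi * t / 4) + 0.2 * rng.normal(size=t.size)
delay = 2                                  # predict 2 frames (~0.3 s) ahead

def linear_extrapolate(hist, k):
    # Extend the last observed velocity k frames forward.
    return hist[-1] + k * (hist[-1] - hist[-2])

def history_based(hist, k, w=20):
    # Find the past window most similar to the current one, reuse its future.
    cur = hist[-w:]
    errs = [np.sum((hist[i:i + w] - cur) ** 2) for i in range(len(hist) - w - k)]
    i = int(np.argmin(errs))
    return hist[i + w + k - 1]

e_lin, e_hist = [], []
for j in range(200, len(pos) - delay):
    e_lin.append(abs(linear_extrapolate(pos[:j], delay) - pos[j + delay - 1]))
    e_hist.append(abs(history_based(pos[:j], delay) - pos[j + delay - 1]))
print(f"linear MAE={np.mean(e_lin):.2f}, history MAE={np.mean(e_hist):.2f}")
```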
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wales, D. J., E-mail: dw34@cam.ac.uk
This perspective focuses on conceptual and computational aspects of the potential energy landscape framework. It has two objectives: first to summarise some key developments of the approach and second to illustrate how such techniques can be applied using a specific example that exploits knowledge of pathways. Recent developments in theory and simulation within the landscape framework are first outlined, including methods for structure prediction, analysis of global thermodynamic properties, and treatment of rare event dynamics. We then develop a connection between the kinetic transition network treatment of dynamics and a potential of mean force defined by a reaction coordinate. The effect of projection from the full configuration space to low dimensionality is illustrated for an atomic cluster. In this example, where a relatively successful structural order parameter is available, the principal change in cluster morphology is reproduced, but some details are not faithfully represented. In contrast, a profile based on configurations that correspond to the discrete path defined geometrically retains all the barriers and minima. This comparison provides insight into the physical origins of “friction” effects in low-dimensionality descriptions of dynamics based upon a reaction coordinate.
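For reference, the potential of mean force along a reaction coordinate mentioned above has the standard statistical-mechanics form (a general definition, not specific to this paper):

```latex
% Potential of mean force along a reaction coordinate \xi(\mathbf{x}),
% for a system with potential energy U(\mathbf{x}) at temperature T:
F(s) = -k_{\mathrm B}T \,\ln\!\int
       \delta\bigl(s-\xi(\mathbf{x})\bigr)\,
       e^{-U(\mathbf{x})/k_{\mathrm B}T}\,\mathrm{d}\mathbf{x}
       \;+\; \mathrm{const}.
```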
Hucka, Michael; Bergmann, Frank T.; Dräger, Andreas; Hoops, Stefan; Keating, Sarah M.; Le Novère, Nicolas; Myers, Chris J.; Olivier, Brett G.; Sahle, Sven; Schaff, James C.; Smith, Lucian P.; Waltemath, Dagmar; Wilkinson, Darren J.
2017-01-01
Summary: Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 5 of SBML Level 2. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org/. PMID:26528569
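To make the declarative form concrete, below is a minimal hand-written SBML skeleton parsed with the Python standard library. The toy model content is invented and the snippet shows structure only; checking the specification's validation rules requires a dedicated library such as libSBML:

```python
import xml.etree.ElementTree as ET

# Minimal SBML Level 2 Version 5 skeleton (toy content, not a validated model).
SBML = """<?xml version="1.0" encoding="UTF-8"?>
<sbml xmlns="http://www.sbml.org/sbml/level2/version5" level="2" version="5">
  <model id="toy">
    <listOfCompartments>
      <compartment id="cell" size="1"/>
    </listOfCompartments>
    <listOfSpecies>
      <species id="S1" compartment="cell" initialAmount="10"/>
      <species id="S2" compartment="cell" initialAmount="0"/>
    </listOfSpecies>
    <listOfReactions>
      <reaction id="conv">
        <listOfReactants><speciesReference species="S1"/></listOfReactants>
        <listOfProducts><speciesReference species="S2"/></listOfProducts>
      </reaction>
    </listOfReactions>
  </model>
</sbml>"""

ns = {"sbml": "http://www.sbml.org/sbml/level2/version5"}
root = ET.fromstring(SBML)
for sp in root.findall(".//sbml:species", ns):
    print(sp.get("id"), sp.get("initialAmount"))
```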
The Systems Biology Markup Language (SBML): Language Specification for Level 3 Version 1 Core
Hucka, Michael; Bergmann, Frank T.; Hoops, Stefan; Keating, Sarah M.; Sahle, Sven; Schaff, James C.; Smith, Lucian P.; Wilkinson, Darren J.
2017-01-01
Summary: Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 1 of SBML Level 3 Core. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org/. PMID:26528564
Hucka, Michael; Bergmann, Frank T; Dräger, Andreas; Hoops, Stefan; Keating, Sarah M; Le Novère, Nicolas; Myers, Chris J; Olivier, Brett G; Sahle, Sven; Schaff, James C; Smith, Lucian P; Waltemath, Dagmar; Wilkinson, Darren J
2015-09-04
Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 5 of SBML Level 2. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org.
The Systems Biology Markup Language (SBML): Language Specification for Level 3 Version 1 Core.
Hucka, Michael; Bergmann, Frank T; Hoops, Stefan; Keating, Sarah M; Sahle, Sven; Schaff, James C; Smith, Lucian P; Wilkinson, Darren J
2015-09-04
Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 1 of SBML Level 3 Core. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org/.
Thomas, Xavier; Raffoux, Emmanuel; Renneville, Aline; Pautas, Cécile; de Botton, Stéphane; de Revel, Thierry; Reman, Oumedaly; Terré, Christine; Gardin, Claude; Chelghoum, Youcef; Boissel, Nicolas; Quesnel, Bruno; Cordonnier, Catherine; Bourhis, Jean-Henri; Elhamri, Mohamed; Fenaux, Pierre; Preudhomme, Claude; Socié, Gérard; Michallet, Mauricette; Castaigne, Sylvie; Dombret, Hervé
2012-09-01
Forty-seven percent of adults with acute myeloid leukemia (AML) who entered the ALFA-9802 trial and achieved a first complete remission (CR) experienced a first relapse. We examined the outcome of these 190 adult patients. Eighty-four patients (44%) achieved a second CR. The median overall survival (OS) after relapse was 8.9 months with a 2-year OS at 25%. Factors predicting a better outcome after relapse were stem cell transplant (SCT) performed in second CR and a first CR duration >1 year. Risk groups defined at the time of diagnosis and treatment received in first CR also influenced the outcome after relapse. The best results were obtained in patients with core binding factor (CBF)-AML, while patients initially defined as favorable intermediate risk showed an outcome after relapse similar to that of patients initially entering the poor-risk group. We conclude that most adult patients with recurring AML could not be rescued using currently available therapies, although allogeneic SCT remains the best therapeutic option at this stage of the disease. Copyright © 2012 Elsevier Ltd. All rights reserved.
The Systems Biology Markup Language (SBML): Language Specification for Level 3 Version 1 Core.
Hucka, Michael; Bergmann, Frank T; Hoops, Stefan; Keating, Sarah M; Sahle, Sven; Schaff, James C; Smith, Lucian P; Wilkinson, Darren J
2015-06-01
Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 1 of SBML Level 3 Core. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org/.
Hucka, Michael; Bergmann, Frank T; Dräger, Andreas; Hoops, Stefan; Keating, Sarah M; Le Novère, Nicolas; Myers, Chris J; Olivier, Brett G; Sahle, Sven; Schaff, James C; Smith, Lucian P; Waltemath, Dagmar; Wilkinson, Darren J
2015-06-01
Computational models can help researchers to interpret data, understand biological function, and make quantitative predictions. The Systems Biology Markup Language (SBML) is a file format for representing computational models in a declarative form that can be exchanged between different software systems. SBML is oriented towards describing biological processes of the sort common in research on a number of topics, including metabolic pathways, cell signaling pathways, and many others. By supporting SBML as an input/output format, different tools can all operate on an identical representation of a model, removing opportunities for translation errors and assuring a common starting point for analyses and simulations. This document provides the specification for Version 5 of SBML Level 2. The specification defines the data structures prescribed by SBML as well as their encoding in XML, the eXtensible Markup Language. This specification also defines validation rules that determine the validity of an SBML document, and provides many examples of models in SBML form. Other materials and software are available from the SBML project web site, http://sbml.org/.
Mechanics Model for Simulating RC Hinges under Reversed Cyclic Loading
Shukri, Ahmad Azim; Visintin, Phillip; Oehlers, Deric J.; Jumaat, Mohd Zamin
2016-01-01
Describing the moment rotation (M/θ) behavior of reinforced concrete (RC) hinges is essential in predicting the behavior of RC structures under severe loadings, such as cyclic earthquake motions and blast loading. The behavior of RC hinges is defined by localized slip or partial interaction (PI) behaviors in both the tension and compression regions. In the tension region, slip between the reinforcement and the concrete defines crack spacing, crack opening and closing, and tension stiffening. In the compression region, slip along concrete-to-concrete interfaces defines the formation and failure of concrete softening wedges. Being strain-based, commonly applied analysis techniques, such as the moment curvature approach, cannot directly simulate these PI behaviors because they are localized and displacement based. Therefore, strain-based approaches must resort to empirical factors to define behaviors such as tension stiffening and concrete softening hinge lengths. In this paper, a displacement-based segmental moment rotation approach, which directly simulates the partial interaction behaviors in both compression and tension, is developed for predicting the M/θ response of an RC beam hinge under cyclic loading. Significantly, in order to develop the segmental approach, a partial interaction model is developed to predict the tension stiffening load-slip relationship between the reinforcement and the concrete. PMID:28773430
Mechanics Model for Simulating RC Hinges under Reversed Cyclic Loading.
Shukri, Ahmad Azim; Visintin, Phillip; Oehlers, Deric J; Jumaat, Mohd Zamin
2016-04-22
Describing the moment rotation (M/θ) behavior of reinforced concrete (RC) hinges is essential in predicting the behavior of RC structures under severe loadings, such as cyclic earthquake motions and blast loading. The behavior of RC hinges is defined by localized slip or partial interaction (PI) behaviors in both the tension and compression regions. In the tension region, slip between the reinforcement and the concrete defines crack spacing, crack opening and closing, and tension stiffening. In the compression region, slip along concrete-to-concrete interfaces defines the formation and failure of concrete softening wedges. Being strain-based, commonly applied analysis techniques, such as the moment curvature approach, cannot directly simulate these PI behaviors because they are localized and displacement based. Therefore, strain-based approaches must resort to empirical factors to define behaviors such as tension stiffening and concrete softening hinge lengths. In this paper, a displacement-based segmental moment rotation approach, which directly simulates the partial interaction behaviors in both compression and tension, is developed for predicting the M/θ response of an RC beam hinge under cyclic loading. Significantly, in order to develop the segmental approach, a partial interaction model is developed to predict the tension stiffening load-slip relationship between the reinforcement and the concrete.
NASA Astrophysics Data System (ADS)
Hook, Vivian; Lietz, Christopher B.; Podvin, Sonia; Cajka, Tomas; Fiehn, Oliver
2018-05-01
Neuropeptides are short peptides in the range of 3-40 residues that are secreted for cell-cell communication in neuroendocrine systems. In the nervous system, neuropeptides comprise the largest group of neurotransmitters. In the endocrine system, neuropeptides function as peptide hormones to coordinate intercellular signaling among target physiological systems. The diversity of neuropeptide functions is defined by their distinct primary sequences, peptide lengths, proteolytic processing of pro-neuropeptide precursors, and covalent modifications. Global, untargeted neuropeptidomics mass spectrometry is advantageous for defining the structural features of the thousands to tens of thousands of neuropeptides present in biological systems. Defining neuropeptide structures is the basis for defining the proteolytic processing pathways that convert pro-neuropeptides into active peptides. Neuropeptidomics has revealed that processing of pro-neuropeptides occurs at paired basic residue sites, and at non-basic residue sites. Processing results in neuropeptides with known functions and generates novel peptides representing intervening peptide domains flanked by dibasic residue processing sites, identified by neuropeptidomics. While very short peptide products of 2-4 residues are predicted from pro-neuropeptide dibasic processing sites, such peptides have not been readily identified; therefore, it will be logical to utilize metabolomics to identify very short peptides alongside neuropeptidomics in future studies. Proteolytic processing is accompanied by covalent post-translational modifications (PTMs) of neuropeptides comprising C-terminal amidation, N-terminal pyroglutamate, disulfide bonds, phosphorylation, sulfation, acetylation, glycosylation, and others. Neuropeptidomics can define PTM features of neuropeptides. In summary, neuropeptidomics for untargeted, global analyses of neuropeptides is essential for elucidation of the proteases that generate diverse neuropeptides for cell-cell signaling.
NASA Astrophysics Data System (ADS)
Hook, Vivian; Lietz, Christopher B.; Podvin, Sonia; Cajka, Tomas; Fiehn, Oliver
2018-04-01
Neuropeptides are short peptides in the range of 3-40 residues that are secreted for cell-cell communication in neuroendocrine systems. In the nervous system, neuropeptides comprise the largest group of neurotransmitters. In the endocrine system, neuropeptides function as peptide hormones to coordinate intercellular signaling among target physiological systems. The diversity of neuropeptide functions is defined by their distinct primary sequences, peptide lengths, proteolytic processing of pro-neuropeptide precursors, and covalent modifications. Global, untargeted neuropeptidomics mass spectrometry is advantageous for defining the structural features of the thousands to tens of thousands of neuropeptides present in biological systems. Defining neuropeptide structures is the basis for defining the proteolytic processing pathways that convert pro-neuropeptides into active peptides. Neuropeptidomics has revealed that processing of pro-neuropeptides occurs at paired basic residue sites, and at non-basic residue sites. Processing results in neuropeptides with known functions and generates novel peptides representing intervening peptide domains flanked by dibasic residue processing sites, identified by neuropeptidomics. While very short peptide products of 2-4 residues are predicted from pro-neuropeptide dibasic processing sites, such peptides have not been readily identified; therefore, it will be logical to utilize metabolomics to identify very short peptides alongside neuropeptidomics in future studies. Proteolytic processing is accompanied by covalent post-translational modifications (PTMs) of neuropeptides comprising C-terminal amidation, N-terminal pyroglutamate, disulfide bonds, phosphorylation, sulfation, acetylation, glycosylation, and others. Neuropeptidomics can define PTM features of neuropeptides. In summary, neuropeptidomics for untargeted, global analyses of neuropeptides is essential for elucidation of the proteases that generate diverse neuropeptides for cell-cell signaling.
NASA Technical Reports Server (NTRS)
Goldberg, Robert K.; Carney, Kelly S.; Dubois, Paul; Hoffarth, Canio; Khaled, Bilal; Shyamsunder, Loukham; Rajan, Subramaniam; Blankenhorn, Gunther
2017-01-01
The need for accurate material models to simulate the deformation, damage and failure of polymer matrix composites under impact conditions is becoming critical as these materials are gaining increased use in the aerospace and automotive communities. The aerospace community has identified several key capabilities which are currently lacking in the available material models in commercial transient dynamic finite element codes. To attempt to improve the predictive capability of composite impact simulations, a next generation material model is being developed for incorporation within the commercial transient dynamic finite element code LS-DYNA. The material model, which incorporates plasticity, damage and failure, utilizes experimentally based tabulated input to define the evolution of plasticity and damage and the initiation of failure as opposed to specifying discrete input parameters such as modulus and strength. The plasticity portion of the orthotropic, three-dimensional, macroscopic composite constitutive model is based on an extension of the Tsai-Wu composite failure model into a generalized yield function with a non-associative flow rule. For the damage model, a strain equivalent formulation is used to allow for the uncoupling of the deformation and damage analyses. For the failure model, a tabulated approach is utilized in which a stress or strain based invariant is defined as a function of the location of the current stress state in stress space to define the initiation of failure. Failure surfaces can be defined with any arbitrary shape, unlike traditional failure models where the mathematical functions used to define the failure surface impose a specific shape on the failure surface. In the current paper, the complete development of the failure model is described and the generation of a tabulated failure surface for a representative composite material is discussed.
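The tabulated approach to failure, looking up an allowable stress as a function of the stress state's location in stress space instead of evaluating a closed-form criterion, can be sketched briefly. The two-dimensional table below is a toy stand-in; the actual model is orthotropic, three-dimensional and implemented in LS-DYNA:

```python
import numpy as np

# Toy tabulated failure surface: allowable equivalent stress is tabulated
# against the direction of the stress state in the (sigma_11, sigma_22) plane
# and interpolated. Numbers are illustrative, not a material characterization.
theta_tab = np.linspace(-np.pi, np.pi, 13)      # direction in stress space
radius_tab = 400 + 150 * np.cos(2 * theta_tab)  # allowable magnitude (MPa)

def fails(s11, s22):
    theta = np.arctan2(s22, s11)
    allowable = np.interp(theta, theta_tab, radius_tab)
    return np.hypot(s11, s22) >= allowable  # failure initiation check

print(fails(300.0, 100.0), fails(500.0, 300.0))
```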
Implementation of a Space Weather VOEvent service at IRAP in the frame of Europlanet H2020 PSWS
NASA Astrophysics Data System (ADS)
Gangloff, M.; André, N.; Génot, V.; Cecconi, B.; Le Sidaner, P.; Bouchemit, M.; Budnik, E.; Jourdane, N.
2017-09-01
Under Horizon 2020, the Europlanet Research Infrastructure includes PSWS (Planetary Space Weather Services), a set of new services that extend the concepts of space weather and space situation awareness to other planets of our solar system. One of these services is an Alert service associated in particular with a heliospheric propagator tool for solar wind predictions at planets, a meteor shower prediction tool, and a cometary tail crossing prediction tool. This Alert service is based on VOEvent, an international standard proposed by the IVOA and widely used by the astronomy community. The VOEvent standard provides a means of describing transient celestial events in a machine-readable format. VOEvent is associated with VTP, the VOEvent Transfer Protocol, which defines the system by which VOEvents may be disseminated to the community. This presentation will focus on the enhancements of the VOEvent standard necessary to take into account the needs of the Solar System community, and on Comet, a freely available and open-source implementation of VTP used by PSWS for its Alert service. Comet is implemented by several partners of PSWS, including IRAP and Observatoire de Paris. A use case will be presented for the heliospheric propagator tool based on extreme solar wind pressure pulses predicted at planets and probes from a 1D MHD model and real-time observations of solar wind parameters.
kmer-SVM: a web server for identifying predictive regulatory sequence features in genomic data sets
Fletez-Brant, Christopher; Lee, Dongwon; McCallion, Andrew S.; Beer, Michael A.
2013-01-01
Massively parallel sequencing technologies have made the generation of genomic data sets a routine component of many biological investigations. For example, chromatin immunoprecipitation followed by sequencing (ChIP-seq) assays detect genomic regions bound (directly or indirectly) by specific factors, and DNase-seq identifies regions of open chromatin. A major bottleneck in the interpretation of these data is the identification of the underlying DNA sequence code that defines, and ultimately facilitates prediction of, these transcription factor (TF) bound or open chromatin regions. We have recently developed a novel computational methodology, which uses a support vector machine (SVM) with kmer sequence features (kmer-SVM) to identify predictive combinations of short transcription factor-binding sites, which determine the tissue specificity of these genomic assays (Lee, Karchin and Beer, Discriminative prediction of mammalian enhancers from DNA sequence. Genome Res. 2011; 21:2167–80). This regulatory information can (i) give confidence in genomic experiments by recovering previously known binding sites, and (ii) reveal novel sequence features for subsequent experimental testing of cooperative mechanisms. Here, we describe the development and implementation of a web server to allow the broader research community to independently apply our kmer-SVM to analyze and interpret their genomic datasets. We analyze five recently published data sets and demonstrate how this tool identifies accessory factors and repressive sequence elements. kmer-SVM is available at http://kmersvm.beerlab.org. PMID:23771147
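A minimal sketch of the kmer-SVM idea: encode sequences as k-mer count vectors, train a linear SVM, and read off the highest-weight k-mers as candidate sequence features. The toy sequences and planted motif are invented; the server uses the authors' own implementation:

```python
from itertools import product
import numpy as np
from sklearn.svm import LinearSVC

K = 4
KMERS = ["".join(p) for p in product("ACGT", repeat=K)]
INDEX = {km: i for i, km in enumerate(KMERS)}

def kmer_counts(seq):
    """Count every overlapping k-mer in a sequence."""
    v = np.zeros(len(KMERS))
    for i in range(len(seq) - K + 1):
        v[INDEX[seq[i:i + K]]] += 1
    return v

rng = np.random.default_rng(6)
def random_seq(n=100, motif=None):
    s = "".join(rng.choice(list("ACGT"), n))
    return s[:40] + motif + s[40 + len(motif):] if motif else s

pos = [random_seq(motif="GATAAG") for _ in range(100)]  # planted toy motif
neg = [random_seq() for _ in range(100)]
X = np.array([kmer_counts(s) for s in pos + neg])
y = np.array([1] * 100 + [0] * 100)

svm = LinearSVC(C=1.0, max_iter=5000).fit(X, y)
top = np.argsort(svm.coef_[0])[::-1][:5]
print([KMERS[i] for i in top])  # highest-weight k-mers recover the motif
```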
In silico toxicology protocols.
Myatt, Glenn J; Ahlberg, Ernst; Akahori, Yumi; Allen, David; Amberg, Alexander; Anger, Lennart T; Aptula, Aynur; Auerbach, Scott; Beilke, Lisa; Bellion, Phillip; Benigni, Romualdo; Bercu, Joel; Booth, Ewan D; Bower, Dave; Brigo, Alessandro; Burden, Natalie; Cammerer, Zoryana; Cronin, Mark T D; Cross, Kevin P; Custer, Laura; Dettwiler, Magdalena; Dobo, Krista; Ford, Kevin A; Fortin, Marie C; Gad-McDonald, Samantha E; Gellatly, Nichola; Gervais, Véronique; Glover, Kyle P; Glowienke, Susanne; Van Gompel, Jacky; Gutsell, Steve; Hardy, Barry; Harvey, James S; Hillegass, Jedd; Honma, Masamitsu; Hsieh, Jui-Hua; Hsu, Chia-Wen; Hughes, Kathy; Johnson, Candice; Jolly, Robert; Jones, David; Kemper, Ray; Kenyon, Michelle O; Kim, Marlene T; Kruhlak, Naomi L; Kulkarni, Sunil A; Kümmerer, Klaus; Leavitt, Penny; Majer, Bernhard; Masten, Scott; Miller, Scott; Moser, Janet; Mumtaz, Moiz; Muster, Wolfgang; Neilson, Louise; Oprea, Tudor I; Patlewicz, Grace; Paulino, Alexandre; Lo Piparo, Elena; Powley, Mark; Quigley, Donald P; Reddy, M Vijayaraj; Richarz, Andrea-Nicole; Ruiz, Patricia; Schilter, Benoit; Serafimova, Rositsa; Simpson, Wendy; Stavitskaya, Lidiya; Stidl, Reinhard; Suarez-Rodriguez, Diana; Szabo, David T; Teasdale, Andrew; Trejo-Martin, Alejandra; Valentin, Jean-Pierre; Vuorinen, Anna; Wall, Brian A; Watts, Pete; White, Angela T; Wichard, Joerg; Witt, Kristine L; Woolley, Adam; Woolley, David; Zwickl, Craig; Hasselgren, Catrin
2018-07-01
The present publication surveys several applications of in silico (i.e., computational) toxicology approaches across different industries and institutions. It highlights the need to develop standardized protocols when conducting toxicity-related predictions. This contribution articulates the information needed for protocols to support in silico predictions for major toxicological endpoints of concern (e.g., genetic toxicity, carcinogenicity, acute toxicity, reproductive toxicity, developmental toxicity) across several industries and regulatory bodies. Such novel in silico toxicology (IST) protocols, when fully developed and implemented, will ensure in silico toxicological assessments are performed and evaluated in a consistent, reproducible, and well-documented manner across industries and regulatory bodies to support wider uptake and acceptance of the approaches. The development of IST protocols is an initiative developed through a collaboration among an international consortium to reflect the state-of-the-art in in silico toxicology for hazard identification and characterization. A general outline for describing the development of such protocols is included and it is based on in silico predictions and/or available experimental data for a defined series of relevant toxicological effects or mechanisms. The publication presents a novel approach for determining the reliability of in silico predictions alongside experimental data. In addition, we discuss how to determine the level of confidence in the assessment based on the relevance and reliability of the information. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
kmer-SVM: a web server for identifying predictive regulatory sequence features in genomic data sets.
Fletez-Brant, Christopher; Lee, Dongwon; McCallion, Andrew S; Beer, Michael A
2013-07-01
Massively parallel sequencing technologies have made the generation of genomic data sets a routine component of many biological investigations. For example, chromatin immunoprecipitation followed by sequencing (ChIP-seq) assays detect genomic regions bound (directly or indirectly) by specific factors, and DNase-seq identifies regions of open chromatin. A major bottleneck in the interpretation of these data is the identification of the underlying DNA sequence code that defines, and ultimately facilitates prediction of, these transcription factor (TF) bound or open chromatin regions. We have recently developed a novel computational methodology, which uses a support vector machine (SVM) with kmer sequence features (kmer-SVM) to identify predictive combinations of short transcription factor-binding sites, which determine the tissue specificity of these genomic assays (Lee, Karchin and Beer, Discriminative prediction of mammalian enhancers from DNA sequence. Genome Res. 2011; 21:2167-80). This regulatory information can (i) give confidence in genomic experiments by recovering previously known binding sites, and (ii) reveal novel sequence features for subsequent experimental testing of cooperative mechanisms. Here, we describe the development and implementation of a web server to allow the broader research community to independently apply our kmer-SVM to analyze and interpret their genomic datasets. We analyze five recently published data sets and demonstrate how this tool identifies accessory factors and repressive sequence elements. kmer-SVM is available at http://kmersvm.beerlab.org.
Snell, Kym I E; Hua, Harry; Debray, Thomas P A; Ensor, Joie; Look, Maxime P; Moons, Karel G M; Riley, Richard D
2016-01-01
Our aim was to improve meta-analysis methods for summarizing a prediction model's performance when individual participant data are available from multiple studies for external validation. We suggest multivariate meta-analysis for jointly synthesizing calibration and discrimination performance, while accounting for their correlation. The approach estimates a prediction model's average performance, the heterogeneity in performance across populations, and the probability of "good" performance in new populations. This allows different implementation strategies (e.g., recalibration) to be compared. Application is made to a diagnostic model for deep vein thrombosis (DVT) and a prognostic model for breast cancer mortality. In both examples, multivariate meta-analysis reveals that calibration performance is excellent on average but highly heterogeneous across populations unless the model's intercept (baseline hazard) is recalibrated. For the cancer model, the probability of "good" performance (defined by C statistic ≥0.7 and calibration slope between 0.9 and 1.1) in a new population was 0.67 with recalibration but 0.22 without recalibration. For the DVT model, even with recalibration, there was only a 0.03 probability of "good" performance. Multivariate meta-analysis can be used to externally validate a prediction model's calibration and discrimination performance across multiple populations and to evaluate different implementation strategies. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
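One output described above, the probability of good performance in a new population, can be approximated by Monte Carlo once the meta-analysis has produced a mean vector and between-study covariance for the pair (C statistic, calibration slope). The numbers below are invented placeholders:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical meta-analysis output: pooled (C statistic, calibration slope)
# and between-study covariance (heterogeneity plus their correlation).
mean = np.array([0.74, 1.00])
cov = np.array([[0.0009, 0.0004],
                [0.0004, 0.0100]])

# Monte Carlo estimate of P("good" performance in a new population),
# defined here as C >= 0.7 and slope within [0.9, 1.1].
draws = multivariate_normal(mean, cov).rvs(size=100_000, random_state=0)
good = (draws[:, 0] >= 0.7) & (draws[:, 1] >= 0.9) & (draws[:, 1] <= 1.1)
print(f"P(good performance in a new population) ~ {good.mean():.2f}")
```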
Genome-Scale Screening of Drug-Target Associations Relevant to Ki Using a Chemogenomics Approach
Cao, Dong-Sheng; Liang, Yi-Zeng; Deng, Zhe; Hu, Qian-Nan; He, Min; Xu, Qing-Song; Zhou, Guang-Hua; Zhang, Liu-Xia; Deng, Zi-xin; Liu, Shao
2013-01-01
The identification of interactions between drugs and target proteins plays a key role in genomic drug discovery. In the present study, the quantitative binding affinities of drug-target pairs are used as the measurement defining whether or not a drug interacts with a protein, and a chemogenomics framework using an unbiased set of general integrated features and random forest (RF) is then employed to construct a predictive model which can accurately classify drug-target pairs. The predictability of the model is further investigated and validated by several independent validation sets. The built model is used to predict drug-target associations, some of which were confirmed by comparison with experimental data from public biological resources. A drug-target interaction network with high-confidence drug-target pairs was also reconstructed. This network provides further insight into the action of drugs and targets. Finally, a web-based server called PreDPI-Ki was developed to predict drug-target interactions for drug discovery. In addition to providing a high-confidence list of drug-target associations to guide subsequent experimental investigation, these results also contribute to the understanding of drug-target interactions. We can also see that quantitative information on drug-target associations could greatly promote the development of more accurate models. The PreDPI-Ki server is freely available via: http://sdd.whu.edu.cn/dpiki. PMID:23577055
Epidemiologic research using probabilistic outcome definitions.
Cai, Bing; Hennessy, Sean; Lo Re, Vincent; Small, Dylan S
2015-01-01
Epidemiologic studies using electronic healthcare data often define the presence or absence of binary clinical outcomes by using algorithms with imperfect specificity, sensitivity, and positive predictive value. This results in misclassification and bias in study results. We describe and evaluate a new method called probabilistic outcome definition (POD) that uses logistic regression to estimate the probability of a clinical outcome using multiple potential algorithms and then uses multiple imputation to make valid inferences about the risk ratio or other epidemiologic parameters of interest. We conducted a simulation to evaluate the performance of the POD method with two variables that can predict the true outcome and compared the POD method with the conventional method. The simulation results showed that when the true risk ratio is equal to 1.0 (null), the conventional method based on a binary outcome provides unbiased estimates. However, when the risk ratio is not equal to 1.0, the conventional method, whether using one predictive variable or both predictive variables to define the outcome, is biased when the positive predictive value is <100%, and the bias is very severe when the sensitivity or positive predictive value is poor (less than 0.75 in our simulation). In contrast, the POD method provides unbiased estimates of the risk ratio both when this measure of effect equals 1.0 and when it does not. Even when the sensitivity and positive predictive value are low, the POD method continues to provide unbiased estimates of the risk ratio. The POD method provides an improved way to define outcomes in database research. It has a major advantage over the conventional method in that it provides unbiased estimates of risk ratios and is easy to use. Copyright © 2014 John Wiley & Sons, Ltd.
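A minimal simulation sketch of the POD idea under assumed algorithm accuracies; the between-imputation variance from Rubin's rules is omitted for brevity, and all names are illustrative:

    # POD sketch: model the outcome probability from imperfect algorithms in a
    # validated subsample, multiply impute, and pool log risk ratios.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    n = 5000
    exposed = rng.integers(0, 2, n)
    true_y = rng.binomial(1, 0.05 + 0.05 * exposed)            # true RR ~ 2
    alg1 = np.where(rng.random(n) < 0.85, true_y, 1 - true_y)  # imperfect algorithm
    alg2 = np.where(rng.random(n) < 0.75, true_y, 1 - true_y)  # worse algorithm
    A = np.column_stack([alg1, alg2])

    validated = rng.random(n) < 0.10           # gold standard on a 10% subsample
    pod = LogisticRegression().fit(A[validated], true_y[validated])
    p = pod.predict_proba(A)[:, 1]             # outcome probability per subject

    log_rr = []
    for _ in range(20):                        # multiple imputation draws
        y_imp = rng.binomial(1, p)
        log_rr.append(np.log(y_imp[exposed == 1].mean()
                             / y_imp[exposed == 0].mean()))
    print('pooled RR ~', np.exp(np.mean(log_rr)))  # Rubin's rules give the SE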
Yang, Qinglin; Su, Yingying; Hussain, Mohammed; Chen, Weibi; Ye, Hong; Gao, Daiquan; Tian, Fei
2014-05-01
Burst suppression ratio (BSR) is a quantitative electroencephalography (qEEG) parameter. The purpose of our study was to compare the accuracy of BSR with that of other EEG parameters in predicting poor outcomes in adults who sustained post-anoxic coma and were not treated with therapeutic hypothermia. EEG was registered and recorded at least once within 7 days of post-anoxic coma onset. Electrodes were placed according to the international 10-20 system, using a 16-channel layout. Each EEG expert scored raw EEG using a grading scale adapted from Young and scored amplitude-integrated electroencephalography tracings, in addition to obtaining the qEEG parameter, defined as BSR with a set threshold. Glasgow Outcome Scale scores of 1 or 2 at 3 months, determined by two blinded neurologists, were defined as poor outcome. Sixty patients with a Glasgow Coma Scale score of 8 or less after an anoxic event were included. The sensitivity (97.1%), specificity (73.3%), positive predictive value (82.5%), and negative predictive value (95.0%) of BSR in predicting poor outcome were higher than those of other EEG variables. BSR1 and BSR2 were reliable in predicting death (area under the curve > 0.8, P < 0.05), with respective cutoff points of 39.8% and 61.6%. BSR1 was reliable in predicting poor outcome (area under the curve = 0.820, P < 0.05) with a cutoff point of 23.9%. BSR1 was also an independent predictor of increased risk of death (odds ratio = 1.042, 95% confidence interval: 1.012-1.073, P = 0.006). BSR may be a better predictor than other qEEG parameters in prognosticating poor outcomes in patients with post-anoxic coma who do not undergo therapeutic hypothermia.
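A minimal sketch of how a BSR can be computed from a single-channel recording; the 5 µV suppression threshold and 0.5 s minimum duration are assumptions for illustration, not the study's definition:

    # Fraction of the record spent in suppression, as a percentage.
    import numpy as np

    def burst_suppression_ratio(eeg_uv, fs_hz, thresh_uv=5.0, min_sup_s=0.5):
        suppressed = np.abs(eeg_uv) < thresh_uv
        min_len = int(min_sup_s * fs_hz)
        counted = np.zeros(len(eeg_uv), dtype=bool)
        start = None
        for i, s in enumerate(suppressed):
            if s and start is None:
                start = i
            elif not s and start is not None:
                if i - start >= min_len:     # only sufficiently long runs count
                    counted[start:i] = True
                start = None
        if start is not None and len(eeg_uv) - start >= min_len:
            counted[start:] = True
        return 100.0 * counted.mean()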
NASA Astrophysics Data System (ADS)
Tesoriero, A. J.; Terziotti, S.
2014-12-01
Nitrate trends in streams often do not match expectations based on recent nitrogen source loadings to the land surface. Groundwater discharge with long travel times has been suggested as the likely cause for these observations. The fate of nitrate in groundwater depends to a large extent on the occurrence of denitrification along flow paths. Because denitrification in groundwater is inhibited when dissolved oxygen (DO) concentrations are high, defining the oxic-suboxic interface has been critical in determining pathways for nitrate transport in groundwater and to streams at the local scale. Predicting redox conditions on a regional scale is complicated by the spatial variability of reaction rates. In this study, logistic regression and boosted classification tree analysis were used to predict the probability of oxic water in groundwater in the Chesapeake Bay watershed. The probability of oxic water (DO > 2 mg/L) was predicted by relating DO concentrations in over 3,000 groundwater samples to indicators of residence time and/or electron donor availability. Variables that describe position in the flow system (e.g., depth to top of the open interval), soil drainage and surficial geology were the most important predictors of oxic water. Logistic regression and boosted classification tree analysis correctly predicted the presence or absence of oxic conditions in over 75% of the samples in both training and validation data sets. Predictions of the percentages of oxic wells in deciles of risk were very accurate (r2 > 0.9) in both the training and validation data sets. Depth to the bottom of the oxic layer was predicted and is being used to estimate the effect that groundwater denitrification has on stream nitrate concentrations and the time lag between the application of nitrogen at the land surface and its effect on streams.
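A sketch of the logistic-regression step with invented data (the predictor names below are placeholders loosely modeled on those described), including the decile-of-risk calibration check:

    # Logistic model for P(oxic) with a decile-of-risk calibration check.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    n = 3000
    X = pd.DataFrame({
        'open_interval_depth_m': rng.gamma(2.0, 15.0, n),
        'well_drained_soil': rng.integers(0, 2, n),
        'surficial_sand': rng.integers(0, 2, n),
    })
    logit = (1.5 - 0.04 * X['open_interval_depth_m']
             + 0.8 * X['well_drained_soil'] + 1.0 * X['surficial_sand'])
    oxic = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))   # DO > 2 mg/L indicator

    model = LogisticRegression().fit(X, oxic)
    p_oxic = model.predict_proba(X)[:, 1]
    deciles = pd.qcut(p_oxic, 10, labels=False, duplicates='drop')
    print(pd.DataFrame({'predicted': p_oxic, 'observed': oxic})
            .groupby(deciles).mean())        # predicted vs observed per decile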
Predicting Intracerebral Hemorrhage Growth With the Spot Sign: The Effect of Onset-to-Scan Time.
Dowlatshahi, Dar; Brouwers, H Bart; Demchuk, Andrew M; Hill, Michael D; Aviv, Richard I; Ufholz, Lee-Anne; Reaume, Michael; Wintermark, Max; Hemphill, J Claude; Murai, Yasuo; Wang, Yongjun; Zhao, Xingquan; Wang, Yilong; Li, Na; Sorimachi, Takatoshi; Matsumae, Mitsunori; Steiner, Thorsten; Rizos, Timolaos; Greenberg, Steven M; Romero, Javier M; Rosand, Jonathan; Goldstein, Joshua N; Sharma, Mukul
2016-03-01
Hematoma expansion after acute intracerebral hemorrhage is common and is associated with early deterioration and poor clinical outcome. The computed tomographic angiography (CTA) spot sign is a promising predictor of expansion; however, frequency and predictive values are variable across studies, possibly because of differences in onset-to-CTA time. We performed a patient-level meta-analysis to define the relationship between onset-to-CTA time and frequency and predictive ability of the spot sign. We completed a systematic review for studies of CTA spot sign and hematoma expansion. We subsequently pooled patient-level data on the frequency and predictive values for significant hematoma expansion according to 5 predefined categorized onset-to-CTA times. We calculated spot-sign frequency both as raw and frequency-adjusted rates. Among 2051 studies identified, 12 met our inclusion criteria. Baseline hematoma volume, spot-sign status, and time-to-CTA were available for 1176 patients, and 1039 patients had follow-up computed tomographies for hematoma expansion analysis. The overall spot sign frequency was 26%, decreasing from 39% within 2 hours of onset to 13% beyond 8 hours (P<0.001). There was a significant decrease in hematoma expansion in spot-positive patients as onset-to-CTA time increased (P=0.004), with positive predictive values decreasing from 53% to 33%. The frequency of the CTA spot sign is inversely related to intracerebral hemorrhage onset-to-CTA time. Furthermore, the positive predictive value of the spot sign for significant hematoma expansion decreases as time-to-CTA increases. Our results offer more precise risk stratification for patients with acute intracerebral hemorrhage and will help refine clinical prediction rules for intracerebral hemorrhage expansion. © 2016 American Heart Association, Inc.
Potential Predictability of the Monsoon Subclimate Systems
NASA Technical Reports Server (NTRS)
Yang, Song; Lau, K.-M.; Chang, Y.; Schubert, S.
1999-01-01
While the El Niño/Southern Oscillation (ENSO) phenomenon can be predicted with some success using coupled oceanic-atmospheric models, the skill of predicting the tropical monsoons is low regardless of the methods applied. The low skill of monsoon prediction may be either because the monsoons are not defined appropriately or because they are not influenced significantly by boundary forcing. The latter points to the importance of internal dynamics in monsoon variability and underlies many of the prominent chaotic features of the monsoons. In this study, we analyze results from nine AMIP-type ensemble experiments with the NASA/GEOS-2 general circulation model to assess the potential predictability of the tropical climate system. We focus on the variability and predictability of tropical monsoon rainfall on seasonal-to-interannual time scales. It is known that the tropical climate is more predictable than its extratropical counterpart. However, predictability differs from one climate subsystem to another within the tropics. It is important to understand the differences among these subsystems in order to increase our skill in seasonal-to-interannual prediction. We assess potential predictability by comparing the magnitudes of internal and forced variances as defined by Harzallah and Sadourny (1995). The internal variance measures the spread among the various ensemble members. The forced part of the rainfall variance is determined by the magnitude of the ensemble mean rainfall anomaly and by the degree of consistency of the results from the various experiments.
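The variance decomposition can be sketched in a few lines; the ensemble array below is a random placeholder standing in for the nine GEOS-2 members:

    # Internal vs forced variance from an ensemble (members x years).
    import numpy as np

    rng = np.random.default_rng(4)
    rain = rng.normal(size=(9, 20))       # placeholder seasonal rainfall anomalies

    ens_mean = rain.mean(axis=0)          # boundary-forced signal, per year
    forced_var = ens_mean.var(ddof=1)     # variance of the ensemble mean
    internal_var = ((rain - ens_mean) ** 2).mean()   # spread among members
    print('potential predictability ratio:', forced_var / internal_var)

Where the forced variance dominates the internal variance, the subsystem is potentially predictable from boundary forcing.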
Russell, Charlotte; Wearden, Alison J; Fairclough, Gillian; Emsley, Richard A; Kyle, Simon D
2016-04-01
This study aimed to (1) examine the relationship between subjective and actigraphy-defined sleep, and next-day fatigue in chronic fatigue syndrome (CFS); and (2) investigate the potential mediating role of negative mood on this relationship. We also sought to examine the effect of presleep arousal on perceptions of sleep. Twenty-seven adults meeting the Oxford criteria for CFS and self-identifying as experiencing sleep difficulties were recruited to take part in a prospective daily diary study, enabling symptom capture in real time over a 6-day period. A paper diary was used to record nightly subjective sleep and presleep arousal. Mood and fatigue symptoms were rated four times each day. Actigraphy was employed to provide objective estimations of sleep duration and continuity. Multilevel modelling revealed that subjective sleep variables, namely sleep quality, efficiency, and perceiving sleep to be unrefreshing, predicted following-day fatigue levels, with poorer subjective sleep related to increased fatigue. Lower subjective sleep efficiency and perceiving sleep as unrefreshing predicted reduced variance in fatigue across the following day. Negative mood on waking partially mediated these relationships. Increased presleep cognitive and somatic arousal predicted self-reported poor sleep. Actigraphy-defined sleep, however, was not found to predict following-day fatigue. For the first time we show that nightly subjective sleep predicts next-day fatigue in CFS and identify important factors driving this relationship. Our data suggest that sleep specific interventions, targeting presleep arousal, perceptions of sleep and negative mood on waking, may improve fatigue in CFS. © 2016 Associated Professional Sleep Societies, LLC.
Deter, Russell L.; Lee, Wesley; Yeo, Lami; Romero, Roberto
2012-01-01
Objectives To characterize 2nd and 3rd trimester fetal growth using Individualized Growth Assessment in a large cohort of fetuses with normal growth outcomes. Methods A prospective longitudinal study of 119 pregnancies was carried out from 18 weeks, MA, to delivery. Measurements of eleven fetal growth parameters were obtained from 3D scans at 3–4 week intervals. Regression analyses were used to determine Start Points [SP] and Rossavik model [P = c·t^(k + s·t)] coefficients c, k and s for each parameter in each fetus. Second trimester growth model specification functions were re-established. These functions were used to generate individual growth models and determine predicted s and s-residual [s = pred s + s-resid] values. Actual measurements were compared to predicted growth trajectories obtained from the growth models and Percent Deviations [% Dev = ((actual − predicted)/predicted) × 100] calculated. Age-specific reference standards for this statistic were defined using 2-level statistical modeling for the nine directly measured parameters and estimated weight. Results Rossavik models fit the data for all parameters very well [R2: 99%], with SPs and k values similar to those found in a much smaller cohort. The c values were strongly related to the 2nd trimester slope [R2: 97%], as was predicted s to estimated c [R2: 95%]. The latter was negative for skeletal parameters and positive for soft tissue parameters. The s-residuals were unrelated to estimated c's [R2: 0%] and had mean values of zero. Rossavik models predicted 3rd trimester growth with systematic errors close to 0% and random errors [95% range] of 5.7–10.9% and 20.0–24.3% for one- and three-dimensional parameters, respectively. Moderate changes in age-specific variability were seen in the 3rd trimester. Conclusions IGA procedures for evaluating 2nd and 3rd trimester growth are now established based on a large cohort [4–6 fold larger than those used previously], thus permitting more reliable growth assessment with each fetus acting as its own control. New, more rigorously defined, age-specific standards for the evaluation of 3rd trimester growth deviations are now available for 10 anatomical parameters. Our results are also consistent with the predicted s and s-residual being representatives of growth controllers operating through the insulin-like growth factor [IGF] axis. PMID:23962305
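A sketch of fitting the Rossavik model P(t) = c·t^(k + s·t) to one fetus's serial measurements and scoring a later scan with the % Dev statistic; the measurements below are invented for illustration:

    # Fit P(t) = c * t**(k + s*t) to serial scans, then score a later scan.
    import numpy as np
    from scipy.optimize import curve_fit

    def rossavik(t, c, k, s):
        return c * t ** (k + s * t)

    t_weeks = np.array([18.0, 21.0, 24.0, 27.0, 30.0])   # menstrual age at scans
    hc_cm = np.array([15.2, 19.0, 22.4, 25.6, 28.2])     # e.g., head circumference

    (c, k, s), _ = curve_fit(rossavik, t_weeks, hc_cm, p0=(0.1, 1.5, 0.0))

    t_new, actual = 33.0, 30.8
    predicted = rossavik(t_new, c, k, s)
    pct_dev = (actual - predicted) / predicted * 100.0   # the % Dev statistic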
Human centromere genomics: now it's personal.
Hayden, Karen E
2012-07-01
Advances in human genomics have accelerated studies in evolution, disease, and cellular regulation. However, centromere sequences, defining the chromosomal interface with spindle microtubules, remain largely absent from ongoing genomic studies and disconnected from functional, genome-wide analyses. This disparity results from the challenge of predicting the linear order of multi-megabase-sized regions that are composed almost entirely of near-identical satellite DNA. Acknowledging these challenges, the field of human centromere genomics possesses the potential to rapidly advance given the availability of individual, or personalized, genome projects matched with the promise of long-read sequencing technologies. Here I review the current genomic model of human centromeres in consideration of those studies involving functional datasets that examine the role of sequence in centromere identity.
Intellectual Disability, Mild Cognitive Impairment, and Risk for Dementia
Silverman, Wayne P.; Zigman, Warren B.; Krinsky-McHale, Sharon J.; Ryan, Robert; Schupf, Nicole
2013-01-01
People with intellectual disability (ID) are living longer than ever before, raising concerns about old-age associated disorders. Dementia is among the most serious of these disorders, and theories relating cognitive reserve to risk predict that older adults with ID should be particularly vulnerable. Previous estimates of relative risk for dementia associated with ID have been inconsistent, and the present analyses examined the possible influence of variation in diagnostic criteria on findings. As expected, relaxation in the stringency of case definition for adults with ID increased relative risk, underscoring the importance of developing valid criteria for defining mild cognitive impairment, early dementia, and distinguishing between the two in adults with ID. Once available, these standards will contribute to more effective evidence-based planning. PMID:24273589
Old stellar populations. 5: Absorption feature indices for the complete LICK/IDS sample of stars
NASA Technical Reports Server (NTRS)
Worthey, Guy; Faber, S. M.; Gonzalez, J. Jesus; Burstein, D.
1994-01-01
Twenty-one optical absorption features, 11 of which have been previously defined, are automatically measured in a sample of 460 stars. Following Gorgas et al., the indices are summarized in fitting functions that give index strengths as functions of stellar temperature, gravity, and [Fe/H]. This project was carried out with the purpose of predicting index strengths in the integrated light of stellar populations of different ages and metallicities, but the data should be valuable for stellar studies in the Galaxy as well. Several of the new indices appear to be promising indicators of metallicity for old stellar populations. A complete list of index data and atmospheric parameters is available in computer-readable form.
Neurobehavioral Development of Common Marmoset Monkeys
Schultz-Darken, Nancy; Braun, Katarina M.; Emborg, Marina E.
2016-01-01
Common marmoset (Callithrix jacchus) monkeys are a resource for biomedical research and their use is predicted to increase due to the suitability of this species for transgenic approaches. Identification of abnormal neurodevelopment due to genetic modification relies upon the comparison with validated patterns of normal behavior defined by unbiased methods. As scientists unfamiliar with nonhuman primate development are interested to apply genomic editing techniques in marmosets, it would be beneficial to the field that the investigators use validated methods of postnatal evaluation that are age and species appropriate. This review aims to analyze current available data on marmoset physical and behavioral postnatal development, describe the methods used and discuss next steps to better understand and evaluate marmoset normal and abnormal postnatal neurodevelopment. PMID:26502294
Im, Hyung-Jun; Bradshaw, Tyler; Solaiyappan, Meiyappan; Cho, Steve Y
2018-02-01
Numerous methods to segment tumors using 18F-fluorodeoxyglucose positron emission tomography (FDG PET) have been introduced. Metabolic tumor volume (MTV) refers to the metabolically active volume of the tumor segmented using FDG PET, and has been shown to be useful in predicting patient outcome and in assessing treatment response. Also, tumor segmentation using FDG PET has useful applications in radiotherapy treatment planning. Despite extensive research on MTV showing promising results, MTV is not used in standard clinical practice yet, mainly because there is no consensus on the optimal method to segment tumors in FDG PET images. In this review, we discuss currently available methods to measure MTV using FDG PET, and assess the advantages and disadvantages of the methods.
Ajay, Dara; Gangwal, Rahul P; Sangamwar, Abhay T
2015-01-01
Intelligent Patent Analysis Tool (IPAT) is an online data-retrieval tool based on a text-mining algorithm that extracts specific patent information in a predetermined pattern into an Excel sheet. The software is designed and developed to retrieve and analyze technology information from multiple patent documents and generate various patent landscape graphs and charts. The software is coded in C# in Visual Studio 2010; it extracts publicly available patent information from web pages such as Google Patents and simultaneously studies various technology trends based on user-defined parameters. In other words, IPAT combined with manual categorization will act as an excellent technology assessment tool in competitive intelligence and due diligence for forecasting future R&D.
Molecular mechanisms involved in convergent crop domestication.
Lenser, Teresa; Theißen, Günter
2013-12-01
Domestication has helped us to understand evolution. We argue that, vice versa, novel insights into evolutionary principles could provide deeper insights into domestication. Molecular analyses have demonstrated that convergent phenotypic evolution is often based on molecular changes in orthologous genes or pathways. Recent studies have revealed that during plant domestication the causal mutations for convergent changes in key traits are likely to be located in particular genes. These insights may contribute to defining candidate genes for genetic improvement during the domestication of new plant species. Such efforts may help to increase the range of arable crops available, thus increasing crop biodiversity and food security to help meet the predicted demands of the continually growing global population under rapidly changing environmental conditions. Copyright © 2013 Elsevier Ltd. All rights reserved.
Preliminary Sizing and Performance Evaluation of Supersonic Cruise Aircraft
NASA Technical Reports Server (NTRS)
Fetterman, D. E., Jr.
1976-01-01
The basic processes of a method that performs sizing operations on a baseline aircraft and determines their subsequent effects on aerodynamics, propulsion, weights, and mission performance are described. The input requirements of the associated computer program are defined and its output listings explained. Results obtained by applying the method to an advanced supersonic technology concept are discussed. These results include the effects of wing loading, thrust-to-weight ratio, and technology improvements on range performance, and possible gains in both range and payload capability that become available through growth versions of the baseline aircraft. The results from an in-depth contractual study that confirm the range gain predicted for a particular wing loading/thrust-to-weight ratio combination are also included.
DOT National Transportation Integrated Search
2012-05-01
The purpose of this document is to fully define and describe the logic flow and mathematical equations for a predictive braking enforcement algorithm intended for implementation in a Positive Train Control (PTC) system.
Modelling invasion for a habitat generalist and a specialist plant species
Evangelista, P.H.; Kumar, S.; Stohlgren, T.J.; Jarnevich, C.S.; Crall, A.W.; Norman, J. B.; Barnett, D.T.
2008-01-01
Predicting suitable habitat and the potential distribution of invasive species is a high priority for resource managers and systems ecologists. Most models are designed to identify habitat characteristics that define the ecological niche of a species with little consideration to individual species' traits. We tested five commonly used modelling methods on two invasive plant species, the habitat generalist Bromus tectorum and habitat specialist Tamarix chinensis, to compare model performances, evaluate predictability, and relate results to distribution traits associated with each species. Most of the tested models performed similarly for each species; however, the generalist species proved to be more difficult to predict than the specialist species. The highest area under the receiver-operating characteristic curve values with independent validation data sets for B. tectorum and T. chinensis were 0.503 and 0.885, respectively. Similarly, a confusion matrix for B. tectorum had the highest overall accuracy of 55%, while the overall accuracy for T. chinensis was 85%. Models for the generalist species had varying performances, poor evaluations, and inconsistent results. This may be a result of a generalist's capability to persist in a wide range of environmental conditions that are not easily defined by the data, independent variables or model design. Models for the specialist species had consistently strong performances, high evaluations, and similar results among different model applications. This is likely a consequence of the specialist's requirement for explicit environmental resources and ecological barriers that are easily defined by predictive models. Although defining new invaders as generalist or specialist species can be challenging, model performances and evaluations may provide valuable information on a species' potential invasiveness.
NEW CATEGORICAL METRICS FOR AIR QUALITY MODEL EVALUATION
Traditional categorical metrics used in model evaluations are "clear-cut" measures in that the model's ability to predict an exceedance is defined by a fixed threshold concentration and the metrics are defined by observation-forecast sets that are paired both in space and time. T...
We used a spatially explicit population model of wolves (Canis lupus) to propose a framework for defining rangewide recovery priorities and finer-scale strategies for regional reintroductions. The model predicts that Yellowstone and central Idaho, where wolves have recently been ...
Predictive factors of the nursing diagnosis sedentary lifestyle in people with high blood pressure.
Guedes, Nirla Gomes; Lopes, Marcos Venícios de Oliveira; Araujo, Thelma Leite de; Moreira, Rafaella Pessoa; Martins, Larissa Castelo Guedes
2011-01-01
To verify the reproducibility of the defining characteristics and related factors used to identify a sedentary lifestyle in patients with high blood pressure. A cross-sectional study of 310 patients diagnosed with high blood pressure. Socio-demographic data and variables related to the defining characteristics and related factors of a sedentary lifestyle were collected. The Kappa coefficient was used to analyze reproducibility. The sensitivity, specificity, and predictive value of the defining characteristics were also analyzed. Logistic regression was applied in the analysis of possible predictors. The defining characteristic with the greatest sensitivity was demonstrates physical deconditioning (98.92%). The characteristics chooses a daily routine lacking physical exercise and verbalizes preference for activities low in physical activity presented higher values of specificity (99.21% and 95.97%, respectively). The following indicators were identified as powerful predictors (85.2%) for the identification of a sedentary lifestyle: demonstrates physical deconditioning, verbalizes preference for activities low in physical activity, and lack of training for accomplishment of physical exercise. © 2010 Wiley Periodicals, Inc.
Predicting the Initial Lapse Using a Mobile Health Application after Alcohol Detoxification
ERIC Educational Resources Information Center
Chih, Ming-Yuan
2013-01-01
The prediction and prevention of the initial lapse--which is defined as the first lapse after a period of abstinence--is important because the initial lapse often leads to subsequent lapses (within the same lapse episode) or relapse. The prediction of the initial lapse may allow preemptive intervention to be possible. This dissertation reports on…
ERIC Educational Resources Information Center
Savage, Robert; Kozakewich, Meagan; Genesee, Fred; Erdos, Caroline; Haigh, Corinne
2017-01-01
This study examined whether decoding and linguistic comprehension abilities, broadly defined by the Simple View of Reading, in grade 1 each uniquely predicted the grade 6 writing performance of English-speaking children (n = 76) who were educated bilingually in both English, their first language, and French, a second language. Prediction was made…
2014-06-20
concentrated on SACCON. The planform and section profiles were defined in cooperation between DLR and EADS-MAS during the early stages of AVT-161. DLR...however, most predictions were made as first-order temporal predictions. Given the highly unsteady flow fields observed in the experiments, unsteady
Web application for automatic prediction of gene translation elongation efficiency.
Sokolov, Vladimir; Zuraev, Bulat; Lashin, Sergei; Matushkin, Yury
2015-09-03
Expression efficiency is one of the major characteristics describing genes in various modern investigations. Expression efficiency of genes is regulated at various stages: transcription, translation, posttranslational protein modification and others. In this study, a special EloE (Elongation Efficiency) web application is described. EloE sorts an organism's genes in descending order of the theoretical rate of the elongation stage of translation, based on analysis of their nucleotide sequences. The theoretical data obtained correlate significantly with available experimental data on gene expression in various organisms. In addition, the program identifies preferential codons in the organism's genes and defines the distribution of potential secondary-structure energy in the 5′ and 3′ regions of mRNA. EloE can be useful for preliminary estimation of translation elongation efficiency for genes for which experimental data are not yet available. Some results can be used, for instance, in other programs modeling artificial genetic structures in genetic engineering experiments.
Bidirectional Anticipation of Future Osmotic Challenges by Vasopressin Neurons.
Mandelblat-Cerf, Yael; Kim, Angela; Burgess, Christian R; Subramanian, Siva; Tannous, Bakhos A; Lowell, Bradford B; Andermann, Mark L
2017-01-04
Ingestion of water and food are major hypo- and hyperosmotic challenges. To protect the body from osmotic stress, posterior pituitary-projecting, vasopressin-secreting neurons (VPpp neurons) counter osmotic perturbations by altering their release of vasopressin, which controls renal water excretion. Vasopressin levels begin to fall within minutes of water consumption, even prior to changes in blood osmolality. To ascertain the precise temporal dynamics by which water or food ingestion affect VPpp neuron activity, we directly recorded the spiking and calcium activity of genetically defined VPpp neurons. In states of elevated osmolality, water availability rapidly decreased VPpp neuron activity within seconds, beginning prior to water ingestion, upon presentation of water-predicting cues. In contrast, food availability following food restriction rapidly increased VPpp neuron activity within seconds, but only following feeding onset. These rapid and distinct changes in activity during drinking and feeding suggest diverse neural mechanisms underlying anticipatory regulation of VPpp neurons. Published by Elsevier Inc.
MetaPathways v2.5: quantitative functional, taxonomic and usability improvements.
Konwar, Kishori M; Hanson, Niels W; Bhatia, Maya P; Kim, Dongjae; Wu, Shang-Ju; Hahn, Aria S; Morgan-Lang, Connor; Cheung, Hiu Kan; Hallam, Steven J
2015-10-15
Next-generation sequencing is producing vast amounts of sequence information from natural and engineered ecosystems. Although this data deluge has an enormous potential to transform our lives, knowledge creation and translation need software applications that scale with increasing data processing and analysis requirements. Here, we present improvements to MetaPathways, an annotation and analysis pipeline for environmental sequence information that expedites this transformation. We specifically address pathway prediction hazards through integration of a weighted taxonomic distance and enable quantitative comparison of assembled annotations through a normalized read-mapping measure. Additionally, we improve LAST homology searches through BLAST-equivalent E-values and output formats that are natively compatible with prevailing software applications. Finally, an updated graphical user interface allows for keyword annotation query and projection onto user-defined functional gene hierarchies, including the Carbohydrate-Active Enzyme database. MetaPathways v2.5 is available on GitHub: http://github.com/hallamlab/metapathways2. shallam@mail.ubc.ca Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
Ratcliffe, Elizabeth; Hourd, Paul; Guijarro-Leach, Juan; Rayment, Erin; Williams, David J; Thomas, Robert J
2013-01-01
Commercial regenerative medicine will require large quantities of clinical-specification human cells. The cost and quality of manufacture is notoriously difficult to control due to highly complex processes with poorly defined tolerances. As a step to overcome this, we aimed to demonstrate the use of 'quality-by-design' tools to define the operating space for economic passage of a scalable human embryonic stem cell production method with minimal cell loss. Design of experiments response surface methodology was applied to generate empirical models to predict optimal operating conditions for a unit of manufacture of a previously developed automatable and scalable human embryonic stem cell production method. Two models were defined to predict cell yield and cell recovery rate postpassage, in terms of the predictor variables of media volume, cell seeding density, media exchange and length of passage. Predicted operating conditions for maximized productivity were successfully validated. Such 'quality-by-design' type approaches to process design and optimization will be essential to reduce the risk of product failure and patient harm, and to build regulatory confidence in cell therapy manufacturing processes.
Variation of surface ozone in Campo Grande, Brazil: meteorological effect analysis and prediction.
Pires, J C M; Souza, A; Pavão, H G; Martins, F G
2014-09-01
The effect of meteorological variables on surface ozone (O3) concentrations was analysed based on the temporal variation of linear correlations and on artificial neural network (ANN) models defined by genetic algorithms (GAs). ANN models were also used to predict the daily average concentration of this air pollutant in Campo Grande, Brazil. Three methodologies were applied using GAs, two of them considering threshold models. In these models, the variables selected to define different regimes were daily average O3 concentration, relative humidity and solar radiation. The threshold model that considers two O3 regimes was the one that correctly described the effect of important meteorological variables on O3 behaviour, while also presenting good predictive performance. Solar radiation, relative humidity and rainfall were significant for both O3 regimes; however, wind speed (dispersion effect) was only significant for high concentrations. According to this model, high O3 concentrations corresponded to high solar radiation and low relative humidity and wind speed. This model proved to be a powerful tool for interpreting O3 behaviour, and is useful for defining policy strategies for human health protection regarding air pollution.
In quest of a systematic framework for unifying and defining nanoscience
2009-01-01
This article proposes a systematic framework for unifying and defining nanoscience based on historic first principles and step logic that led to a “central paradigm” (i.e., unifying framework) for traditional elemental/small-molecule chemistry. As such, a Nanomaterials classification roadmap is proposed, which divides all nanomatter into Category I: discrete, well-defined and Category II: statistical, undefined nanoparticles. We consider only Category I, well-defined nanoparticles which are >90% monodisperse as a function of Critical Nanoscale Design Parameters (CNDPs) defined according to: (a) size, (b) shape, (c) surface chemistry, (d) flexibility, and (e) elemental composition. Classified as either hard (H) (i.e., inorganic-based) or soft (S) (i.e., organic-based) categories, these nanoparticles were found to manifest pervasive atom mimicry features that included: (1) a dominance of zero-dimensional (0D) core–shell nanoarchitectures, (2) the ability to self-assemble or chemically bond as discrete, quantized nanounits, and (3) exhibited well-defined nanoscale valencies and stoichiometries reminiscent of atom-based elements. These discrete nanoparticle categories are referred to as hard or soft particle nanoelements. Many examples describing chemical bonding/assembly of these nanoelements have been reported in the literature. We refer to these hard:hard (H-n:H-n), soft:soft (S-n:S-n), or hard:soft (H-n:S-n) nanoelement combinations as nanocompounds. Due to their quantized features, many nanoelement and nanocompound categories are reported to exhibit well-defined nanoperiodic property patterns. These periodic property patterns are dependent on their quantized nanofeatures (CNDPs) and dramatically influence intrinsic physicochemical properties (i.e., melting points, reactivity/self-assembly, sterics, and nanoencapsulation), as well as important functional/performance properties (i.e., magnetic, photonic, electronic, and toxicologic properties). We propose this perspective as a modest first step toward more clearly defining synthetic nanochemistry as well as providing a systematic framework for unifying nanoscience. With further progress, one should anticipate the evolution of future nanoperiodic table(s) suitable for predicting important risk/benefit boundaries in the field of nanoscience. Electronic supplementary material The online version of this article (doi:10.1007/s11051-009-9632-z) contains supplementary material, which is available to authorized users. PMID:21170133
Prat, Aleix; Cheang, Maggie Chon U.; Martín, Miguel; Parker, Joel S.; Carrasco, Eva; Caballero, Rosalía; Tyldesley, Scott; Gelmon, Karen; Bernard, Philip S.; Nielsen, Torsten O.; Perou, Charles M.
2013-01-01
Purpose Current immunohistochemical (IHC)-based definitions of luminal A and B breast cancers are imperfect when compared with multigene expression-based assays. In this study, we sought to improve the IHC subtyping by examining the pathologic and gene expression characteristics of genomically defined luminal A and B subtypes. Patients and Methods Gene expression and pathologic features were collected from primary tumors across five independent cohorts: British Columbia Cancer Agency (BCCA) tamoxifen-treated only, Grupo Español de Investigación en Cáncer de Mama 9906 trial, BCCA no systemic treatment cohort, PAM50 microarray training data set, and a combined publicly available microarray data set. Optimal cutoffs of the percentage of progesterone receptor (PR)-positive tumor cells to predict survival were derived and independently tested. Multivariable Cox models were used to test the prognostic significance. Results Clinicopathologic comparisons among luminal A and B subtypes consistently identified higher rates of PR positivity, human epidermal growth factor receptor 2 (HER2) negativity, and histologic grade 1 in luminal A tumors. Quantitative PR gene and protein expression were also found to be significantly higher in luminal A tumors. An empiric cutoff of more than 20% PR-positive tumor cells was statistically chosen and proved significant for predicting survival differences within IHC-defined luminal A tumors independently of endocrine therapy administration. Finally, no additional prognostic value within hormone receptor (HR)-positive/HER2-negative disease was observed with the use of the IHC4 score when intrinsic IHC-based subtypes were used that included the more than 20% PR-positive tumor cells cutoff, and vice versa. Conclusion Semiquantitative IHC expression of PR adds prognostic value within the current IHC-based luminal A definition by improving the identification of good-outcome breast cancers. The new proposed IHC-based definition of luminal A tumors is HR positive/HER2 negative/Ki-67 less than 14%, and PR more than 20%. PMID:23233704
Scale-dependent habitat use by a large free-ranging predator, the Mediterranean fin whale
NASA Astrophysics Data System (ADS)
Cotté, Cédric; Guinet, Christophe; Taupier-Letage, Isabelle; Mate, Bruce; Petiau, Estelle
2009-05-01
Since the heterogeneity of oceanographic conditions drives abundance, distribution, and availability of prey, it is essential to understand how foraging predators interact with their dynamic environment at various spatial and temporal scales. We examined the spatio-temporal relationships between oceanographic features and abundance of fin whales (Balaenoptera physalus), the largest free-ranging predator in the Western Mediterranean Sea (WM), through two independent approaches. First, spatial modeling was used to estimate whale density, using waiting distance (the distance between detections) for fin whales along ferry routes across the WM, in relation to remotely sensed oceanographic parameters. At a large scale (basin and year), fin whales exhibited fidelity to the northern WM with a summer-aggregated and winter-dispersed pattern. At mesoscale (20-100 km), whales were found in colder, saltier (from an on-board system) and dynamic areas defined by steep altimetric and temperature gradients. Second, using an independent fin whale satellite tracking dataset, we showed that tracked whales were effectively preferentially located in favorable habitats, i.e. in areas of high predicted densities as identified by our previous model using oceanographic data contemporaneous to the tracking period. We suggest that the large-scale fidelity corresponds to temporally and spatially predictable habitat of the whales' favorite prey, the northern krill (Meganyctiphanes norvegica), while mesoscale relationships are likely to identify areas of high prey concentration and availability.
Combinatorial therapy discovery using mixed integer linear programming.
Pang, Kaifang; Wan, Ying-Wooi; Choi, William T; Donehower, Lawrence A; Sun, Jingchun; Pant, Dhruv; Liu, Zhandong
2014-05-15
Combinatorial therapies play increasingly important roles in combating complex diseases. Owing to the huge cost associated with experimental methods in identifying optimal drug combinations, computational approaches can provide a guide to limit the search space and reduce cost. However, few computational approaches have been developed for this purpose, and thus there is a great need of new algorithms for drug combination prediction. Here we proposed to formulate the optimal combinatorial therapy problem into two complementary mathematical algorithms, Balanced Target Set Cover (BTSC) and Minimum Off-Target Set Cover (MOTSC). Given a disease gene set, BTSC seeks a balanced solution that maximizes the coverage on the disease genes and minimizes the off-target hits at the same time. MOTSC seeks a full coverage on the disease gene set while minimizing the off-target set. Through simulation, both BTSC and MOTSC demonstrated a much faster running time over exhaustive search with the same accuracy. When applied to real disease gene sets, our algorithms not only identified known drug combinations, but also predicted novel drug combinations that are worth further testing. In addition, we developed a web-based tool to allow users to iteratively search for optimal drug combinations given a user-defined gene set. Our tool is freely available for noncommercial use at http://www.drug.liuzlab.org/. zhandong.liu@bcm.edu Supplementary data are available at Bioinformatics online.
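A sketch of MOTSC posed as a small integer program, assuming the PuLP package is available; the drug/target sets are toys, and the objective sums per-drug off-targets rather than the size of their union (a simplification of the set-cover objective described above):

    # MOTSC as a small integer program with PuLP (toy drug/target sets).
    from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum

    disease_genes = {'G1', 'G2', 'G3'}
    targets = {'drugA': {'G1', 'G2', 'X1'},          # X* = off-target genes
               'drugB': {'G3', 'X1', 'X2'},
               'drugC': {'G2', 'G3', 'X3', 'X4'}}

    prob = LpProblem('MOTSC', LpMinimize)
    use = {d: LpVariable(f'use_{d}', cat=LpBinary) for d in targets}
    # Objective: total off-target hits of the selected drugs.
    prob += lpSum(len(targets[d] - disease_genes) * use[d] for d in targets)
    for g in disease_genes:                          # cover every disease gene
        prob += lpSum(use[d] for d in targets if g in targets[d]) >= 1
    prob.solve()
    print([d for d in targets if use[d].value() == 1])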
MIRNA-DISTILLER: A Stand-Alone Application to Compile microRNA Data from Databases.
Rieger, Jessica K; Bodan, Denis A; Zanger, Ulrich M
2011-01-01
MicroRNAs (miRNA) are small non-coding RNA molecules of ∼22 nucleotides which regulate large numbers of genes by binding to seed sequences at the 3'-untranslated region of target gene transcripts. The target mRNA is then usually degraded or its translation inhibited, resulting in posttranscriptional down-regulation of gene expression at the mRNA and/or protein level. Due to the bioinformatic difficulties in predicting functional miRNA binding sites, several publicly available databases have been developed that predict miRNA binding sites based on different algorithms. The parallel use of different databases is currently indispensable, but inconvenient and time-consuming, especially when working with numerous genes of interest. We have therefore developed a new stand-alone program, termed MIRNA-DISTILLER, which compiles miRNA data for given target genes from public databases. Currently implemented are TargetScan, microCosm, and miRDB, which may be queried independently, pairwise, or together to calculate the respective intersections. Data are stored locally for application of further analysis tools including freely definable biological parameter filters, customized output lists for both miRNAs and target genes, and various graphical facilities. The software, a data example file and a tutorial are freely available at http://www.ikp-stuttgart.de/content/language1/html/10415.asp.
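The intersection logic at the core of the compilation step reduces to plain set operations; a sketch with placeholder miRNA names, not actual database output:

    # Consensus of per-database miRNA predictions for one target gene.
    targetscan = {'miR-21', 'miR-34a', 'miR-122'}
    microcosm = {'miR-21', 'miR-122', 'miR-155'}
    mirdb = {'miR-21', 'miR-34a', 'miR-122', 'miR-200b'}

    all_three = targetscan & microcosm & mirdb       # strictest consensus
    any_two = ((targetscan & microcosm) | (targetscan & mirdb)
               | (microcosm & mirdb)) - all_three    # pairwise-only support
    print(sorted(all_three), sorted(any_two))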
Influence of the pressure dependent coefficient of friction on deep drawing springback predictions
NASA Astrophysics Data System (ADS)
Gil, Imanol; Galdos, Lander; Mendiguren, Joseba; Mugarra, Endika; Sáenz de Argandoña, Eneko
2016-10-01
This research studies the effect of considering an advanced variable friction coefficient on the springback prediction of stamping processes. Traditional constant-coefficient-of-friction assumptions are being replaced by more advanced friction coefficient definitions. The aim of this work is to show the influence of defining a pressure-dependent friction coefficient on numerical springback predictions for a DX54D mild steel, an HSLA380 and a DP780 high strength steel. The pressure-dependent friction model of each material was fitted to experimental data obtained from Strip Drawing tests. These friction models were then implemented in a numerical simulation of a drawing process for an industrial automotive part. The results showed important differences between defining a pressure-dependent friction coefficient and a constant friction coefficient.
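A sketch of one way to calibrate such a model, assuming a power-law pressure dependence; the functional form, reference pressure, and data points are assumptions for illustration, not the authors' fitted model:

    # Fit a pressure-dependent Coulomb coefficient to strip-drawing data.
    import numpy as np
    from scipy.optimize import curve_fit

    def mu_of_p(p_mpa, mu0, n):
        p_ref = 1.0                          # reference contact pressure, MPa
        return mu0 * (p_mpa / p_ref) ** (n - 1.0)

    p_mpa = np.array([2.0, 5.0, 10.0, 20.0, 40.0])     # test pressures
    mu_meas = np.array([0.16, 0.14, 0.13, 0.115, 0.10])

    (mu0, n), _ = curve_fit(mu_of_p, p_mpa, mu_meas, p0=(0.17, 0.9))
    # In the stamping FE model, evaluate mu_of_p at each contact node's
    # pressure instead of using a single constant coefficient.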
Population Dynamics of Early Human Migration in Britain
Vahia, Mayank N.; Ladiwala, Uma; Mahathe, Pavan; Mathur, Deepak
2016-01-01
Background Early human migration is largely determined by geography and human needs. These are both deterministic parameters when small populations move into unoccupied areas where conflicts and large group dynamics are not important. The early period of human migration into the British Isles provides such a laboratory which, because of its relative geographical isolation, may allow some insights into the complex dynamics of early human migration and interaction. Method and Results We developed a simulation code based on human affinity to habitable land, as defined by availability of water sources, altitude, and flatness of land, in choosing the path of migration. Movement of people across the island of Britain over the prehistoric period from their initial entry points was simulated on the basis of data from the megalithic period. Topographical and hydro-shed data from satellite databases were used to define habitability, based on distance from water bodies, flatness of the terrain, and altitude above sea level. We simulated population movement based on assumptions of affinity for more habitable places, with the rate of movement tempered by existing populations. We compared results of our computer simulations with genetic data and show that our simulation can predict fairly accurately the points of contact between different migratory paths. Such comparison also provides more detailed information about the path of people's movement over ~2000 years before the present era. Conclusions We demonstrate an accurate method to simulate prehistoric movements of people based upon current topographical satellite data. Our findings are validated by recently-available genetic data. Our method may prove useful in determining early human population dynamics even when no genetic information is available. PMID:27148959
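A toy sketch of this kind of habitability-driven spread on a grid; the terrain, weights, and capacity are all invented, and the authors' actual code and satellite inputs are not reproduced:

    # Habitability-weighted population spread on a grid.
    import numpy as np

    rng = np.random.default_rng(5)
    alt = rng.random((50, 50)) * 500.0        # altitude (m), placeholder terrain
    water = rng.random((50, 50)) * 10.0       # distance to water (km)
    slope = np.hypot(*np.gradient(alt))       # flatness proxy

    hab = 1.0 / (1.0 + water) + 1.0 / (1.0 + slope) - alt / 1000.0
    pop = np.zeros_like(hab)
    pop[0, 0] = 1000.0                        # entry point of the migrants

    for _ in range(200):
        pop *= 1.02                           # simple population growth
        surplus = np.clip(pop - 50.0, 0.0, None)   # assumed capacity per cell
        pop -= surplus
        for i, j in zip(*np.nonzero(surplus)):
            nbrs = [(a, b) for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                    if 0 <= a < 50 and 0 <= b < 50]
            best = max(nbrs, key=lambda ab: hab[ab])   # move toward habitability
            pop[best] += surplus[i, j]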
Ingham, Steven C; Fanslau, Melody A; Burnham, Greg M; Ingham, Barbara H; Norback, John P; Schaffner, Donald W
2007-06-01
A computer-based tool (available at: www.wisc.edu/foodsafety/meatresearch) was developed for predicting pathogen growth in raw pork, beef, and poultry meat. The tool, THERM (temperature history evaluation for raw meats), predicts the growth of pathogens in pork and beef (Escherichia coli O157:H7, Salmonella serovars, and Staphylococcus aureus) and on poultry (Salmonella serovars and S. aureus) during short-term temperature abuse. The model was developed as follows: 25-g samples of raw ground pork, beef, and turkey were inoculated with a five-strain cocktail of the target pathogen(s) and held at isothermal temperatures from 10 to 43.3 degrees C. Log CFU per sample data were obtained for each pathogen and used to determine lag-phase duration (LPD) and growth rate (GR) by DMFit software. The LPD and GR were used to develop the THERM predictive tool, into which chronological time and temperature data for raw meat processing and storage are entered. The THERM tool then predicts a delta log CFU value for the desired pathogen-product combination. The accuracy of THERM was tested in 20 different inoculation experiments that involved multiple products (coarse-ground beef, skinless chicken breast meat, turkey scapula meat, and ground turkey) and temperature-abuse scenarios. With the time-temperature data from each experiment, THERM accurately predicted pathogen growth and no growth (with growth defined as delta log CFU > 0.3) in 67, 85, and 95% of the experiments with E. coli O157:H7, Salmonella serovars, and S. aureus, respectively, and yielded fail-safe predictions in the remaining experiments. We conclude that THERM is a useful tool for qualitatively predicting pathogen behavior (growth and no growth) in raw meats. Potential applications include evaluating process deviations and critical limits under the HACCP (hazard analysis critical control point) system.
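A sketch of the style of calculation such a tool performs; the LPD and GR curves below are assumed placeholder functions, not the published DMFit-derived values:

    # Step through a time-temperature profile: consume the lag phase first,
    # then accrue log growth at the temperature-specific rate.
    def lpd_h(temp_c):                   # lag-phase duration (h), assumed curve
        return max(40.0 - temp_c, 2.0)

    def gr_log_per_h(temp_c):            # growth rate (log CFU/h), assumed curve
        return max(0.012 * (temp_c - 10.0), 0.0)

    profile = [(2.0, 30.0), (6.0, 35.0), (2.0, 20.0)]   # (hours, deg C)
    lag_used, dlog = 0.0, 0.0
    for hours, temp in profile:
        if lag_used < 1.0:               # fraction of lag consumed so far
            frac = min(hours / lpd_h(temp), 1.0 - lag_used)
            lag_used += frac
            hours -= frac * lpd_h(temp)
        dlog += gr_log_per_h(temp) * max(hours, 0.0)
    print(f'predicted delta log CFU = {dlog:.2f} (growth if > 0.3)')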
Maternal Mind-Mindedness Provides a Buffer for Pre-Adolescents at Risk for Disruptive Behavior.
Hughes, Claire; Aldercotte, Amanda; Foley, Sarah
2017-02-01
Maternal mind-mindedness, defined as the propensity to view one's child as an agent with independent thoughts and feelings, mitigates the impact of low maternal education on conduct problems in young children (Meins et al. 2013), but has been little studied beyond the preschool years. Addressing this gap, we applied a multi-measure and multi-informant approach to assess family adversity and disruptive behavior at age 12 for a socially diverse sample of 116 children for whom ratings of disruptive behavior at age 6 were available. Each mother was asked to describe her child and transcripts of these five-minute speech samples were coded for (i) mind-mindedness (defined by the proportion of child attributes that were mental rather than physical or behavioral) and (ii) positivity (defined by the proportion of child attributes that were positive rather than neutral or negative). Our regression results showed that, independent of associations with prior adjustment, family adversity, child gender and low maternal monitoring, mothers' mind-mindedness (but not positivity) predicted unique variance in disruptive behavior at age 12. In addition, a trend interaction term provided partial support for the hypothesis that pre-adolescents exposed to family adversity may benefit in particular from maternal mind-mindedness. We discuss the possible mechanisms underpinning these findings and their implications for clinical interventions to reduce disruptive behavior in adolescence.
Non-parametric adaptative JPEG fragments carving
NASA Astrophysics Data System (ADS)
Amrouche, Sabrina Cherifa; Salamani, Dalila
2018-04-01
The most challenging JPEG recovery tasks arise when the file header is missing. In this paper we propose to use a two-layer machine learning model to restore headerless JPEG images. We first build a classifier able to identify the structural properties of the images/fragments and then use an AutoEncoder (AE) to learn the fragment features for the header prediction. We define a universal JPEG header, and the remaining free image parameters (height, width) are predicted with a Gradient Boosting Classifier. Our approach resulted in 90% accuracy using the manually defined features and 78% accuracy using the AE features.
NASA Astrophysics Data System (ADS)
Wells, N. S.; Clough, T. J.; Johnson-Beebout, S. E.; Buresh, R. J.
2010-12-01
Accurate prediction of the available nitrogen (N) pool in submerged paddy soils is needed in order to produce rice, one of the world's most essential crops, in an economically and environmentally sustainable manner. By applying emerging nitrate dual-isotope (δ15N–δ18O–NO3−) techniques to paddy systems, we were able to obtain a unique process-level quantification of the synergistic impacts of carbon (C) and water management on N availability. Soil and water samples were collected from fallow experimental plots, with or without organic C amendments, that were maintained under 1 of 3 different hydrologic regimens: continuously submerged, water excluded, or alternate wetting and drying. In continuously submerged soils the δ15N-NO3− : δ18O-NO3− signal of denitrification was not present, indicating that there was no N attenuation. Biological nitrogen fixation (BNF) was the dominant factor in defining the available N pool under these conditions, with δ15N-NO3− approaching atmospheric levels as the size of the pool increased. Using an isotope-based pool-mixing model, it was calculated that 10 ± 2 µg N g−1 soil were contributed by BNF during the fallow. A lack of BNF combined with removal via denitrification (δ15N-NO3− : δ18O-NO3− = 1) caused relatively lower available N levels in dried and alternate wetting-drying soils during this period. The magnitude and net impact of denitrification were defined by the extent of drying and C availability, with rice straw C additions driving tighter coupling of nitrification and denitrification (δ15N:δ18O < 1). However, despite high rates of attenuation during wetting events, soils that had been completely dried and received straw amendments ultimately retained a significantly larger available N pool due to enhanced input from soil organic matter. These findings underline the necessity of, and validate a new means for, accurate quantification of micro-scale biogeochemical interactions for developing farm-scale management practices that can maximize N storage and minimize environmentally undesirable losses.
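A sketch of a two-endmember pool-mixing calculation of the kind described; the endmember signatures, measured mixture value, and pool size are illustrative, not the study's values:

    # Two-endmember d15N mass balance for the BNF contribution.
    def bnf_fraction(d15n_mix, d15n_bnf=0.0, d15n_soil=5.0):
        """Fraction of the pool from BNF (atmospheric N2 has d15N ~ 0 permil)."""
        return (d15n_mix - d15n_soil) / (d15n_bnf - d15n_soil)

    pool_ug_per_g = 18.0                 # measured available-N pool, invented
    f = bnf_fraction(d15n_mix=2.0)       # measured mixture signature, invented
    print(f'BNF contribution ~ {f * pool_ug_per_g:.1f} ug N per g soil')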
ERIC Educational Resources Information Center
Kadhi, T.; Rudley, D.; Holley, D.; Krishna, K.; Ogolla, C.; Rene, E.; Green, T.
2010-01-01
The following report of descriptive statistics addresses the attendance of the 2012 class and the average Actual and Predicted 1L Grade Point Averages (GPAs). Correlational and Inferential statistics are also run on the variables of Attendance (Y/N), Attendance Number of Times, Actual GPA, and Predictive GPA (Predictive GPA is defined as the Index…
Brown, Joshua B; Gestring, Mark L; Leeper, Christine M; Sperry, Jason L; Peitzman, Andrew B; Billiar, Timothy R; Gaines, Barbara A
2017-06-01
The Injury Severity Score (ISS) is the most commonly used injury scoring system in trauma research and benchmarking. An ISS greater than 15 conventionally defines severe injury; however, no studies evaluate whether ISS performs similarly between adults and children. Our objective was to evaluate ISS and Abbreviated Injury Scale (AIS) to predict mortality and define optimal thresholds of severe injury in pediatric trauma. Patients from the Pennsylvania trauma registry 2000-2013 were included. Children were defined as younger than 16 years. Logistic regression predicted mortality from ISS for children and adults. The optimal ISS cutoff for mortality that maximized diagnostic characteristics was determined in children. Regression also evaluated the association between mortality and maximum AIS in each body region, controlling for age, mechanism, and nonaccidental trauma. Analysis was performed in single and multisystem injuries. Sensitivity analyses with alternative outcomes were performed. Included were 352,127 adults and 50,579 children. Children had similar predicted mortality at ISS of 25 as adults at ISS of 15 (5%). The optimal ISS cutoff in children was ISS greater than 25 and had a positive predictive value of 19% and negative predictive value of 99% compared to a positive predictive value of 7% and negative predictive value of 99% for ISS greater than 15 to predict mortality. In single-system-injured children, mortality was associated with head (odds ratio, 4.80; 95% confidence interval, 2.61-8.84; p < 0.01) and chest AIS (odds ratio, 3.55; 95% confidence interval, 1.81-6.97; p < 0.01), but not abdomen, face, neck, spine, or extremity AIS (p > 0.05). For multisystem injury, all body region AIS scores were associated with mortality except extremities. Sensitivity analysis demonstrated ISS greater than 23 to predict need for full trauma activation, and ISS greater than 26 to predict impaired functional independence were optimal thresholds. An ISS greater than 25 may be a more appropriate definition of severe injury in children. Pattern of injury is important, as only head and chest injury drive mortality in single-system-injured children. These findings should be considered in benchmarking and performance improvement efforts. Epidemiologic study, level III.
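A sketch of the cutoff search on simulated data, using Youden's J to balance sensitivity and specificity; the toy risk curve below is an assumption for illustration, not the registry data:

    # Scan ISS thresholds and pick the one maximizing Youden's J for mortality.
    import numpy as np
    from sklearn.metrics import roc_curve

    rng = np.random.default_rng(6)
    iss = rng.integers(1, 76, 5000)                       # ISS ranges 1-75
    p_death = 1.0 / (1.0 + np.exp(-(iss - 35.0) / 6.0))   # toy risk curve
    died = rng.random(5000) < p_death

    fpr, tpr, thresholds = roc_curve(died, iss)
    best = np.argmax(tpr - fpr)                           # Youden's J = tpr - fpr
    print('optimal cutoff: ISS >=', thresholds[best])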
Coupling among Microbial Communities, Biogeochemistry, and Mineralogy across Biogeochemical Facies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stegen, James C.; Konopka, Allan; McKinely, Jim
Physical properties of sediments are commonly used to define subsurface lithofacies and these same physical properties influence subsurface microbial communities. This suggests an (unexploited) opportunity to use the spatial distribution of facies to predict spatial variation in biogeochemically relevant microbial attributes. Here, we characterize three biogeochemical facies—oxidized, reduced, and transition—within one lithofacies and elucidate relationships among facies features and microbial community biomass, diversity, and community composition. Consistent with previous observations of biogeochemical hotspots at environmental transition zones, we find elevated biomass within a biogeochemical facies that occurred at the transition between oxidized and reduced biogeochemical facies. Microbial diversity—the number of microbial taxa—was lower within the reduced facies and was well-explained by a combination of pH and mineralogy. Null modeling revealed that microbial community composition was influenced by ecological selection imposed by redox state and mineralogy, possibly due to effects on nutrient availability or transport. As an illustrative case, we predict microbial biomass concentration across a three-dimensional spatial domain by coupling the spatial distribution of subsurface biogeochemical facies with biomass-facies relationships revealed here. We expect that merging such an approach with hydro-biogeochemical models will provide important constraints on simulated dynamics, thereby reducing uncertainty in model predictions.
Interactions among invasive plants: Lessons from Hawai‘i
D'Antonio, Carla M.; Ostertag, Rebecca; Cordell, Susan; Yelenik, Stephanie G.
2017-01-01
Most ecosystems have multiple-plant invaders rather than single-plant invaders, yet ecological studies and management actions focus largely on single invader species. There is a need for general principles regarding invader interactions across varying environmental conditions, so that secondary invasions can be anticipated and managers can allocate resources toward pretreatment or postremoval actions. By reviewing removal experiments conducted in three Hawaiian ecosystems (a dry tropical forest, a seasonally dry mesic forest, and a lowland wet forest), we evaluate the roles that environmental harshness, priority effects, productivity potential, and species interactions play in influencing secondary invasions, defined here as invasions that are influenced either positively (facilitation) or negatively (inhibition/priority effects) by existing invaders. We generate a conceptual model with a surprise index to describe whether long-term plant invader composition and dominance are predictable or stochastic after a system perturbation such as a removal experiment. Under extremely low resource availability, the surprise index is low, whereas under intermediate-level resource environments, invader dominance is more stochastic and the surprise index is high. At high resource levels, the surprise index is intermediate: Invaders are likely abundant in the environment but their response to a perturbation is more predictable than at intermediate resource levels. We suggest further testing across environmental gradients to determine key variables that dictate the predictability of postremoval invader composition.
Maternal HIV-1 envelope–specific antibody responses and reduced risk of perinatal transmission
Permar, Sallie R.; Fong, Youyi; Vandergrift, Nathan; Fouda, Genevieve G.; Gilbert, Peter; Parks, Robert; Jaeger, Frederick H.; Pollara, Justin; Martelli, Amanda; Liebl, Brooke E.; Lloyd, Krissey; Yates, Nicole L.; Overman, R. Glenn; Shen, Xiaoying; Whitaker, Kaylan; Chen, Haiyan; Pritchett, Jamie; Solomon, Erika; Friberg, Emma; Marshall, Dawn J.; Whitesides, John F.; Gurley, Thaddeus C.; Von Holle, Tarra; Martinez, David R.; Cai, Fangping; Kumar, Amit; Xia, Shi-Mao; Lu, Xiaozhi; Louzao, Raul; Wilkes, Samantha; Datta, Saheli; Sarzotti-Kelsoe, Marcella; Liao, Hua-Xin; Ferrari, Guido; Alam, S. Munir; Montefiori, David C.; Denny, Thomas N.; Moody, M. Anthony; Tomaras, Georgia D.; Gao, Feng; Haynes, Barton F.
2015-01-01
Despite the wide availability of antiretroviral drugs, more than 250,000 infants are vertically infected with HIV-1 annually, emphasizing the need for additional interventions to eliminate pediatric HIV-1 infections. Here, we aimed to define humoral immune correlates of risk of mother-to-child transmission (MTCT) of HIV-1, including responses associated with protection in the RV144 vaccine trial. Eighty-three untreated, HIV-1–transmitting mothers and 165 propensity score–matched nontransmitting mothers were selected from the Women and Infants Transmission Study (WITS) of US nonbreastfeeding, HIV-1–infected mothers. In a multivariable logistic regression model, the magnitude of the maternal IgG responses specific for the third variable loop (V3) of the HIV-1 envelope was predictive of a reduced risk of MTCT. Neutralizing Ab responses against easy-to-neutralize (tier 1) HIV-1 strains also predicted a reduced risk of peripartum transmission in secondary analyses. Moreover, recombinant maternal V3–specific IgG mAbs mediated neutralization of autologous HIV-1 isolates. Thus, common V3-specific Ab responses in maternal plasma predicted a reduced risk of MTCT and mediated autologous virus neutralization, suggesting that boosting these maternal Ab responses may further reduce HIV-1 MTCT. PMID:26053661
The TESIS Project: Revealing Massive Early-Type Galaxies at z > 1
NASA Astrophysics Data System (ADS)
Saracco, P.; Longhetti, M.; Severgnini, P.; Della Ceca, R.; Braito, V.; Bender, R.; Drory, N.; Feulner, G.; Hopp, U.; Mannucci, F.; Maraston, C.
How and when present-day massive early-type galaxies built up and what type of evolution has characterized their growth (star formation and/or merging) remain open issues. The different competing scenarios of galaxy formation predict markedly different properties for early-type galaxies at z > 1. The "monolithic" collapse predicts that massive spheroids formed at high redshift (z > 2.5-3) and that their comoving density is constant at z < 2.5-3 since they evolve only in luminosity. In contrast, in the hierarchical scenario massive spheroids are built up through subsequent mergers, reaching their final masses at z < 1.5 [3,5]. As a consequence, massive systems are very rare at z > 1, their comoving density decreases from z = 0 to z ~ 1.5, and they should experience their last burst of star formation at z < 1.5, concurrent with the merging event(s) of their formation. These opposing predictions for the properties of early-types at z > 1 can be probed observationally once a well-defined sample of massive early-types at z > 1 is available. We are constructing such a sample through a dedicated near-IR very low resolution (λ/Δλ≃50) spectroscopic survey (TNG EROs Spectroscopic Identification Survey, TESIS, [6]) of a complete sample of 30 bright (K < 18.5) Extremely Red Objects (EROs).
Dantan, Etienne; Combescure, Christophe; Lorent, Marine; Ashton-Chess, Joanna; Daguin, Pascal; Classe, Jean-Marc; Giral, Magali; Foucher, Yohann
2014-04-01
Predicting chronic disease evolution from a prognostic marker is a key field of research in clinical epidemiology. However, the prognostic capacity of a marker is not systematically evaluated using the appropriate methodology. We proposed the use of simple equations to calculate time-dependent sensitivity and specificity based on published survival curves, along with other time-dependent indicators such as predictive values, likelihood ratios, and posttest probability ratios, to reappraise prognostic marker accuracy. The methodology is illustrated by back-calculating time-dependent indicators from published articles that present a marker as highly correlated with the time to event, conclude on the high prognostic capacity of the marker, and present the Kaplan-Meier survival curves. The tools necessary to run these direct and simple computations are available online at http://www.divat.fr/en/online-calculators/evalbiom. Our examples illustrate that published conclusions about prognostic marker accuracy may be overoptimistic, thus giving potential for major mistakes in therapeutic decisions. Our approach should help readers better evaluate clinical articles reporting on prognostic markers. Time-dependent sensitivity and specificity inform on the inherent prognostic capacity of a marker for a defined prognostic time. Time-dependent predictive values, likelihood ratios, and posttest probability ratios may additionally contribute to interpreting the marker's prognostic capacity. Copyright © 2014 Elsevier Inc. All rights reserved.
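The back-calculation itself is simple enough to sketch. The function below, time_dependent_accuracy (a hypothetical helper name), applies the standard cumulative-case / dynamic-control identities relating time-dependent sensitivity and specificity to the two groups' survival probabilities and the marker-positive prevalence. The numeric inputs are assumed values of the kind one might read off a published Kaplan-Meier figure; the authors' exact equations are those in the paper and the online calculator.

```python
# Hedged sketch: back-calculating time-dependent sensitivity and specificity
# from published Kaplan-Meier survival probabilities, using cumulative
# (events by time t) and dynamic (event-free at t) definitions.

def time_dependent_accuracy(p_pos, surv_pos, surv_neg):
    """p_pos: prevalence of marker-positive patients
    surv_pos: S+(t), survival at time t in the marker-positive group
    surv_neg: S-(t), survival at time t in the marker-negative group
    """
    events_pos = p_pos * (1 - surv_pos)        # P(marker+, event by t)
    events_neg = (1 - p_pos) * (1 - surv_neg)  # P(marker-, event by t)
    sensitivity = events_pos / (events_pos + events_neg)
    at_risk_pos = p_pos * surv_pos             # P(marker+, event-free at t)
    at_risk_neg = (1 - p_pos) * surv_neg
    specificity = at_risk_neg / (at_risk_pos + at_risk_neg)
    return sensitivity, specificity

# Example with assumed values read off a published curve at t = 5 years:
se, sp = time_dependent_accuracy(p_pos=0.3, surv_pos=0.55, surv_neg=0.85)
print(f"Se(5y)={se:.2f}, Sp(5y)={sp:.2f}")
```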
Predicted Hematologic and Plasma Volume Responses Following Rapid Ascent to Progressive Altitudes
2014-06-01
The study defines baseline demographics and physiologic descriptors that are important in predicting these changes, using general linear mixed models and a comprehensive relational database containing individual ascent profiles, demographics, and physiologic subject descriptors.
NASA Technical Reports Server (NTRS)
Swanson, David J.
1990-01-01
The electromagnetic interference prediction problem is characteristically ill-defined and complicated. Severe EMI problems are prevalent throughout the U.S. Navy, causing both expected and unexpected impacts on the operational performance of electronic combat systems onboard ships. This paper focuses on applying artificial intelligence (AI) technology to the prediction of ship related electromagnetic interference (EMI) problems.
City traffic flow breakdown prediction based on fuzzy rough set
NASA Astrophysics Data System (ADS)
Yang, Xu; Da-wei, Hu; Bing, Su; Duo-jia, Zhang
2017-05-01
In city traffic management, traffic breakdown is a very important issue, defined as a speed drop of a certain amount within a dense traffic situation. In order to predict city traffic flow breakdown accurately, in this paper we propose a novel city traffic flow breakdown prediction algorithm based on the fuzzy rough set. Firstly, we illustrate the city traffic flow breakdown problem, for which three definitions are given: 1) the pre-breakdown flow rate; 2) the rate, density, and speed of the traffic flow breakdown; and 3) the duration of the traffic flow breakdown. Moreover, we define a hazard function to represent the probability of the breakdown ending at a given time point. Secondly, as there are many redundant and irrelevant attributes in city flow breakdown prediction, we propose an attribute reduction algorithm using the fuzzy rough set. Thirdly, we discuss how to predict the city traffic flow breakdown based on attribute reduction and an SVM classifier. Finally, experiments are conducted on data collected from the I-405 Freeway, located in Irvine, California. Experimental results demonstrate that the proposed algorithm achieves a lower average error rate of city traffic flow breakdown prediction.
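The reduce-then-classify stage can be sketched briefly. In the sketch below a generic univariate filter stands in for the paper's fuzzy-rough attribute reduction (scikit-learn ships no fuzzy rough set reducer), and synthetic attributes stand in for the I-405 measurements; only the overall pipeline shape is illustrated.

```python
# Hedged sketch of the predict-after-reduce pipeline. SelectKBest is a
# stand-in for fuzzy-rough attribute reduction; data are synthetic.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 12))   # 12 candidate attributes (flow, density, ...)
# assume breakdown depends on a few attributes only; the rest are redundant
y = (X[:, 0] + 0.8 * X[:, 3] - 0.6 * X[:, 7] + rng.normal(0.5, 1, 600)) > 1

clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=4),   # attribute-reduction stand-in
                    SVC(kernel="rbf", C=1.0))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```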
Regeneration of southern hardwoods: some ecological concepts
David L. Loftis
1989-01-01
Classical concepts of post-disturbance succession through well-defined seral stages to a well-defined climax stage(s) are not a useful conceptual framework for predicting species composition of regeneration resulting from the application of regeneration treatments in complex southern hardwood forests. Hardwood regeneration can be better understood, and more useful...
For QSAR and QSPR modeling of biological and physicochemical properties, estimating the accuracy of predictions is a critical problem. The “distance to model” (DM) can be defined as a metric quantifying the similarity between the training set molecules and a test set compound ...
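One widely used family of DM definitions measures distance to the nearest training molecules in descriptor space. The sketch below computes that variant (mean Euclidean distance to the k nearest training neighbours) on random stand-in descriptors; it is one illustrative choice among the many DM metrics defined in the literature, not necessarily the one evaluated in this work.

```python
# Hedged sketch: "distance to model" as the mean Euclidean distance from a
# test compound to its k nearest training-set neighbours in descriptor space.
# Random descriptors stand in for real molecular descriptors.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
X_train = rng.normal(size=(200, 8))  # descriptor vectors, training molecules
X_test = rng.normal(size=(20, 8))    # test compounds

nn = NearestNeighbors(n_neighbors=5).fit(X_train)
dist, _ = nn.kneighbors(X_test)
dm = dist.mean(axis=1)               # larger DM -> less similar to training set
print(dm.round(2))
```

Compounds with large DM would typically be flagged as lying outside the model's applicability domain, where prediction errors are expected to grow.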
Do Traditional Admissions Criteria Reflect Applicant Creativity?
ERIC Educational Resources Information Center
Pretz, Jean E.; Kaufman, James C.
2017-01-01
College admissions decisions have traditionally focused on high school academic performance and standardized test scores. An ongoing debate is the validity of these measures for predicting success in college; part of this debate includes how success is defined. One potential way of defining college success is a student's creative accomplishments.…
NASA Astrophysics Data System (ADS)
Liemohn, M. W.; Welling, D. T.; De Zeeuw, D.; Kuznetsova, M. M.; Rastaetter, L.; Ganushkina, N. Y.; Ilie, R.; Toth, G.; Gombosi, T. I.; van der Holst, B.
2016-12-01
The ground-based magnetometer index Dst is a decent measure of the near-Earth current systems, in particular those in the storm-time inner magnetosphere. The ability of a large-scale, physics-based model to reproduce, or even predict, this index is therefore a tangible measure of the overall validity of the code for space weather research and space weather operational usage. Experimental real-time simulations of the Space Weather Modeling Framework (SWMF) are conducted at the Community Coordinated Modeling Center (CCMC), with results available there (http://ccmc.gsfc.nasa.gov/realtime.php), through the CCMC Integrated Space Weather Analysis (iSWA) site (http://iswa.ccmc.gsfc.nasa.gov/IswaSystemWebApp/), and the Michigan SWMF site (http://csem.engin.umich.edu/realtime). Presently, two configurations of the SWMF are running in real time at CCMC, both focusing on the geospace modules, using the BATS-R-US magnetohydrodynamic model, the Ridley Ionosphere Model, and with and without the Rice Convection Model for inner magnetospheric drift physics. While both have been running for several years, nearly continuous results are available since July 2015. Dst from the model output is compared against the Kyoto real-time Dst. Various quantitative measures are presented to assess the goodness of fit between the models and observations. In particular, correlation coefficients, RMSE and prediction efficiency are calculated and discussed. In addition, contingency tables are presented, demonstrating the ability of the model to predict "disturbed times" as defined by Dst values below some critical threshold. It is shown that the SWMF run with the inner magnetosphere model is significantly better at reproducing storm-time values, with prediction efficiencies above 0.25 and Heidke skill scores above 0.5. This work was funded by NASA and NSF grants, and the European Union's Horizon 2020 research and innovation programme under grant agreement 637302 PROGRESS.
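The quantitative measures named here are easy to make concrete. The sketch below computes prediction efficiency (one minus the ratio of mean squared error to observed variance) and the Heidke skill score from a contingency table of "disturbed" intervals, on synthetic stand-ins for the modeled and Kyoto Dst series; the -50 nT threshold is an assumption made for the example.

```python
# Hedged sketch: prediction efficiency and Heidke skill score for a Dst
# time series, on synthetic data standing in for model and observed Dst.
import numpy as np

rng = np.random.default_rng(3)
obs = -30 + 40 * rng.standard_normal(1000)  # observed (Kyoto-like) Dst, nT
mod = obs + 15 * rng.standard_normal(1000)  # model Dst with synthetic error

# prediction efficiency: 1 - MSE / variance of the observations
pe = 1 - np.mean((mod - obs) ** 2) / np.var(obs)

# contingency table for "disturbed" intervals, Dst below a chosen threshold
thr = -50.0
hits = np.sum((mod < thr) & (obs < thr))
fa = np.sum((mod < thr) & (obs >= thr))     # false alarms
miss = np.sum((mod >= thr) & (obs < thr))
cn = np.sum((mod >= thr) & (obs >= thr))    # correct negatives

hss = 2 * (hits * cn - fa * miss) / (
    (hits + miss) * (miss + cn) + (hits + fa) * (fa + cn))
print(f"prediction efficiency={pe:.2f}, Heidke skill score={hss:.2f}")
```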
Smartphone technologies and Bayesian networks to assess shorebird habitat selection
Zeigler, Sara; Thieler, E. Robert; Gutierrez, Ben; Plant, Nathaniel G.; Hines, Megan K.; Fraser, James D.; Catlin, Daniel H.; Karpanty, Sarah M.
2017-01-01
Understanding patterns of habitat selection across a species’ geographic distribution can be critical for adequately managing populations and planning for habitat loss and related threats. However, studies of habitat selection can be time consuming and expensive over broad spatial scales, and a lack of standardized monitoring targets or methods can impede the generalization of site-based studies. Our objective was to collaborate with natural resource managers to define available nesting habitat for piping plovers (Charadrius melodus) throughout their U.S. Atlantic coast distribution from Maine to North Carolina, with a goal of providing science that could inform habitat management in response to sea-level rise. We characterized a data collection and analysis approach as being effective if it provided low-cost collection of standardized habitat-selection data across the species’ breeding range within 1–2 nesting seasons and accurate nesting location predictions. In the method developed, >30 managers and conservation practitioners from government agencies and private organizations used a smartphone application, “iPlover,” to collect data on landcover characteristics at piping plover nest locations and random points on 83 beaches and barrier islands in 2014 and 2015. We analyzed these data with a Bayesian network that predicted the probability a specific combination of landcover variables would be associated with a nesting site. Although we focused on a shorebird, our approach can be modified for other taxa. Results showed that the Bayesian network performed well in predicting habitat availability and confirmed predicted habitat preferences across the Atlantic coast breeding range of the piping plover. We used the Bayesian network to map areas with a high probability of containing nesting habitat on the Rockaway Peninsula in New York, USA, as an example application. Our approach facilitated the collation of evidence-based information on habitat selection from many locations and sources, which can be used in management and decision-making applications.
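To make the prediction step concrete, here is a minimal sketch in the same spirit: a categorical naive Bayes model scoring the probability that a combination of discrete landcover variables corresponds to a nest site. It is only a stand-in for the paper's Bayesian network, and the variable names and encodings are invented, not the iPlover schema.

```python
# Hedged sketch: probability that a landcover combination is a nest site.
# CategoricalNB stands in for the study's Bayesian network; features and
# labels are synthetic.
import numpy as np
from sklearn.naive_bayes import CategoricalNB

rng = np.random.default_rng(4)
# columns: substrate type, vegetation density class, distance-to-water class
X = rng.integers(0, 3, size=(500, 3))
# assume nests favour substrate 1 with sparse vegetation (class 0)
y = ((X[:, 0] == 1) & (X[:, 1] == 0)) | (rng.random(500) < 0.1)

bn = CategoricalNB().fit(X, y)
print(bn.predict_proba([[1, 0, 2]])[0, 1])  # P(nest | this landcover combo)
```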
Shaw, Paul B; Donovan, Jennifer L; Tran, Maichi T; Lemon, Stephenie C; Burgwinkle, Pamela; Gore, Joel
2010-08-01
The objectives of this retrospective cohort study are to evaluate the accuracy of pharmacogenetic warfarin dosing algorithms in predicting therapeutic dose and to determine if this degree of accuracy warrants the routine use of genotyping to prospectively dose patients newly started on warfarin. Seventy-one patients of an outpatient anticoagulation clinic at an academic medical center who were age 18 years or older on a stable, therapeutic warfarin dose with international normalized ratio (INR) goal between 2.0 and 3.0, and cytochrome P450 isoenzyme 2C9 (CYP2C9) and vitamin K epoxide reductase complex subunit 1 (VKORC1) genotypes available between January 1, 2007 and September 30, 2008 were included. Six pharmacogenetic warfarin dosing algorithms were identified from the medical literature. Additionally, a 5 mg fixed dose approach was evaluated. Three algorithms, Zhu et al. (Clin Chem 53:1199-1205, 2007), Gage et al. (J Clin Ther 84:326-331, 2008), and the International Warfarin Pharmacogenetic Consortium (IWPC) (N Engl J Med 360:753-764, 2009), were similar in the primary accuracy endpoints, with mean absolute error (MAE) ranging from 1.7 to 1.8 mg/day and coefficient of determination (R²) from 0.61 to 0.66. However, the Zhu et al. algorithm severely over-predicted dose (defined as ≥2× or ≥2 mg/day more than the actual dose) in twice as many patients (14 vs. 7%) as Gage et al. 2008 and IWPC 2009. In conclusion, the algorithms published by Gage et al. 2008 and the IWPC 2009 were the two most accurate pharmacogenetically based equations available in the medical literature for predicting therapeutic warfarin dose in our study population. However, the degree of accuracy demonstrated does not support the routine use of genotyping to prospectively dose all patients newly started on warfarin.
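The accuracy endpoints used here are straightforward to compute. Below is a minimal sketch on invented dose data showing MAE, the coefficient of determination, and the severe over-prediction rate as defined above; no published algorithm's coefficients are reproduced.

```python
# Hedged sketch of the reported accuracy endpoints: MAE, R², and the share
# of severe over-predictions (>= 2x actual or >= 2 mg/day above actual).
# Doses are synthetic; published algorithm coefficients are not used.
import numpy as np

rng = np.random.default_rng(5)
actual = rng.uniform(1.5, 9.0, size=71)           # stable therapeutic doses, mg/day
predicted = actual + rng.normal(0, 1.6, size=71)  # an algorithm's estimates

mae = np.mean(np.abs(predicted - actual))
ss_res = np.sum((actual - predicted) ** 2)
ss_tot = np.sum((actual - actual.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
severe_over = np.mean((predicted >= 2 * actual) | (predicted >= actual + 2))
print(f"MAE={mae:.2f} mg/day, R2={r2:.2f}, severe over-prediction={severe_over:.0%}")
```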
2010-11-01
The terrain considered is defined herein as terrain whose surface deformation due to a single vehicle traversing the surface is negligible, such as paved roads (both asphalt and concrete), for use in ground vehicle reliability predictions. Current application of this work is limited to the analysis of U.S. Highways, comprised of both asphalt and concrete roads; the principal terrain characteristics are defined with analytic basis vectors.
Duthie, A. Bradley; Bocedi, Greta; Reid, Jane M.
2016-01-01
Polyandry is often hypothesized to evolve to allow females to adjust the degree to which they inbreed. Multiple factors might affect such evolution, including inbreeding depression, direct costs, constraints on male availability, and the nature of polyandry as a threshold trait. Complex models are required to evaluate when evolution of polyandry to adjust inbreeding is predicted to arise. We used a genetically explicit individual‐based model to track the joint evolution of inbreeding strategy and polyandry defined as a polygenic threshold trait. Evolution of polyandry to avoid inbreeding only occurred given strong inbreeding depression, low direct costs, and severe restrictions on initial versus additional male availability. Evolution of polyandry to prefer inbreeding only occurred given zero inbreeding depression and direct costs, and given similarly severe restrictions on male availability. However, due to its threshold nature, phenotypic polyandry was frequently expressed even when strongly selected against and hence maladaptive. Further, the degree to which females adjusted inbreeding through polyandry was typically very small, and often reflected constraints on male availability rather than adaptive reproductive strategy. Evolution of polyandry solely to adjust inbreeding might consequently be highly restricted in nature, and such evolution cannot necessarily be directly inferred from observed magnitudes of inbreeding adjustment. PMID:27464756
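The threshold-trait construction used in such models can be sketched compactly: a polygenic liability is summed across loci and the phenotype is expressed only above a threshold. Everything below (locus count, allelic effect sizes, threshold) is an illustrative assumption, not the paper's parameterization.

```python
# Hedged sketch: polyandry as a polygenic threshold trait. Liability is the
# sum of diploid allelic values; the phenotype is all-or-nothing.
import numpy as np

rng = np.random.default_rng(10)
n_ind, n_loci = 1000, 20
alleles = rng.normal(0, 0.1, size=(n_ind, n_loci, 2))  # two alleles per locus
liability = alleles.sum(axis=(1, 2))
threshold = 0.0
polyandrous = liability > threshold
print(f"{polyandrous.mean():.0%} of females express polyandry")
```

The all-or-nothing expression is what lets maladaptive phenotypes persist in such models: selection acts on the continuous liability, while the visible trait flips only at the threshold.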
Quantifying the relationship between sequence and three-dimensional structure conservation in RNA
2010-01-01
Background In recent years, the number of available RNA structures has rapidly grown, reflecting the increased interest in RNA biology. Similarly to the studies carried out two decades ago for proteins, which gave the fundamental grounds for developing comparative protein structure prediction methods, we are now able to quantify the relationship between sequence and structure conservation in RNA. Results Here we introduce an all-against-all sequence- and three-dimensional (3D) structure-based comparison of a representative set of RNA structures, which has allowed us to quantitatively confirm that: (i) there is a measurable relationship between sequence and structure conservation that weakens for alignments resulting in below 60% sequence identity, (ii) evolution tends to conserve more RNA structure than sequence, and (iii) there is a twilight zone for RNA homology detection. Discussion The computational analysis presented here quantitatively describes the relationship between sequence and structure for RNA molecules and defines a twilight zone region for detecting RNA homology. Our work could represent the theoretical basis and limitations for future developments in comparative RNA 3D structure prediction. PMID:20550657
Diky, Vladimir; Chirico, Robert D; Kazakov, Andrei F; Muzny, Chris D; Magee, Joseph W; Abdulagatov, Ilmutdin; Kang, Jeong Won; Kroenlein, Kenneth; Frenkel, Michael
2011-01-24
ThermoData Engine (TDE) is the first full-scale software implementation of the dynamic data evaluation concept, as reported recently in this journal. In the present paper, we describe development of an algorithmic approach to assist experiment planning through assessment of the existing body of knowledge, including availability of experimental thermophysical property data, variable ranges studied, associated uncertainties, state of prediction methods, and parameters for deployment of prediction methods and how these parameters can be obtained using targeted measurements, etc., and, indeed, how the intended measurement may address the underlying scientific or engineering problem under consideration. A second new feature described here is the application of the software capabilities for aid in the design of chemical products through identification of chemical systems possessing desired values of thermophysical properties within defined ranges of tolerance. The algorithms and their software implementation to achieve this are described. Finally, implementation of a new data validation and weighting system is described for vapor-liquid equilibrium (VLE) data, and directions for future enhancements are outlined.
Uncertainties in (E)UV model atmosphere fluxes
NASA Astrophysics Data System (ADS)
Rauch, T.
2008-04-01
Context: During the comparison of synthetic spectra calculated with two NLTE model atmosphere codes, namely TMAP and TLUSTY, we encounter systematic differences in the EUV fluxes due to the treatment of level dissolution by pressure ionization. Aims: In the case of Sirius B, we demonstrate an uncertainty in modeling the EUV flux reliably in order to challenge theoreticians to improve the theory of level dissolution. Methods: We calculated synthetic spectra for hot, compact stars using state-of-the-art NLTE model-atmosphere techniques. Results: Systematic differences may occur due to a code-specific cutoff frequency of the H I Lyman bound-free opacity. This is the case for TMAP and TLUSTY. Both codes predict the same flux level at wavelengths lower than about 1500 Å for stars with effective temperatures (T_eff) below about 30 000 K only if the same cutoff frequency is chosen. Conclusions: The theory of level dissolution in high-density plasmas, which is available for hydrogen only, should be generalized to all species. In particular, the cutoff frequencies for the bound-free opacities should be defined in order to make predictions of UV fluxes more reliable.
An Artificial Intelligence System to Predict Quality of Service in Banking Organizations
Castelli, Mauro; Manzoni, Luca; Popovič, Aleš
2016-01-01
Quality of service, that is, the waiting time that customers must endure in order to receive a service, is a critical performance aspect in private and public service organizations. Providing good service quality is particularly important in highly competitive sectors where similar services exist. In this paper, focusing on the banking sector, we propose an artificial intelligence system for building a model for the prediction of service quality. While the traditional approach used for building analytical models relies on theories and assumptions about the problem at hand, we propose a novel approach for learning models from actual data. Thus, the proposed approach is not biased by the knowledge that experts may have about the problem, but it is completely based on the available data. The system is based on a recently defined variant of genetic programming that allows practitioners to include the concept of semantics in the search process. This will have beneficial effects on the search process and will produce analytical models that are based only on the data and not on domain-dependent knowledge. PMID:27313604
Fragrances Categorized According to Relative Human Skin Sensitization Potency
Api, Anne Marie; Parakhia, Rahul; O'Brien, Devin; Basketter, David A.
2017-01-01
Background The development of non-animal alternatives for skin sensitization potency prediction is dependent upon the availability of a sufficient dataset whose human potency is well characterized. Previously, establishment of basic categorization criteria for 6 defined potency categories allowed 131 substances to be allocated into them entirely on the basis of human information. Objectives To supplement the original dataset with an extended range of fragrance substances. Methods A more fully described version of the original criteria was used to assess 89 fragrance chemicals, allowing their allocation into one of the 6 potency categories. Results None of the fragrance substances were assigned to the most potent group, category 1, whereas 11 were category 2, 22 were category 3, 37 were category 4, and 19 were category 5. Although none were identified as non-sensitizing, note that substances in category 5 also do not pass the threshold for regulatory classification. Conclusions The combined datasets of >200 substances placed into potency categories solely on the basis of human data provide an essential resource for the elaboration and evaluation of predictive non-animal methods. PMID:28691948
Using a bias aware EnKF to account for unresolved structure in an unsaturated zone model
NASA Astrophysics Data System (ADS)
Erdal, D.; Neuweiler, I.; Wollschläger, U.
2014-01-01
When predicting flow in the unsaturated zone, any method for modeling the flow will have to define how, and to what level, the subsurface structure is resolved. In this paper, we use the Ensemble Kalman Filter to assimilate local soil water content observations from both a synthetic layered lysimeter and a real field experiment in layered soil into an unsaturated water flow model. We investigate the use of colored noise bias corrections to account for unresolved subsurface layering in a homogeneous model and compare this approach with a fully resolved model. In both models, we use a simplified model parameterization in the Ensemble Kalman Filter. The results show that the use of bias corrections can increase the predictive capability of a simplified homogeneous flow model if the bias corrections are applied to the model states. If correct knowledge of the layering structure is available, the fully resolved model performs best. However, if no, or erroneous, layering is used in the model, the use of a homogeneous model with bias corrections can be the better choice for modeling the behavior of the system.
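The augmented-state idea behind such bias-aware filtering can be sketched compactly. Below, a toy scalar water-content model is assimilated with an ensemble Kalman filter whose state is augmented by a coloured-noise (AR(1)) bias term added in the observation operator; all dynamics and parameter values are invented for illustration and do not reproduce the study's unsaturated flow model.

```python
# Hedged sketch: EnKF with the state augmented by a coloured-noise bias term,
# the general mechanism behind bias-aware assimilation. Toy scalar model.
import numpy as np

rng = np.random.default_rng(6)
n_ens, n_steps = 50, 40
theta = rng.normal(0.30, 0.02, n_ens)  # ensemble of soil water contents
bias = np.zeros(n_ens)                 # ensemble of bias states
alpha, obs_err = 0.9, 0.01             # bias autocorrelation, obs std dev
truth = 0.33                           # synthetic "true" water content

for _ in range(n_steps):
    # forecast: toy relaxation dynamics plus coloured-noise (AR(1)) bias
    theta = theta + 0.1 * (0.30 - theta) + rng.normal(0, 0.005, n_ens)
    bias = alpha * bias + rng.normal(0, 0.002, n_ens)
    y = truth + rng.normal(0, obs_err)  # synthetic observation
    # EnKF update of the augmented state [theta, bias]; the observation
    # operator maps the augmented state to theta + bias
    h = theta + bias
    aug = np.vstack([theta, bias])
    cov_xy = (aug - aug.mean(1, keepdims=True)) @ (h - h.mean()) / (n_ens - 1)
    var_y = h.var(ddof=1) + obs_err ** 2
    gain = cov_xy / var_y               # Kalman gain per augmented component
    innov = y + rng.normal(0, obs_err, n_ens) - h  # perturbed observations
    theta = theta + gain[0] * innov
    bias = bias + gain[1] * innov

print(f"analysed state={theta.mean():.3f}, estimated bias={bias.mean():.3f}")
```

The bias ensemble absorbs the persistent offset between the (deliberately misspecified) model attractor and the observations, which is the role the colored-noise correction plays for unresolved layering.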
Board-invited review: Using behavior to predict and identify ill health in animals.
Weary, D M; Huzzey, J M; von Keyserlingk, M A G
2009-02-01
We review recent research in one of the oldest and most important applications of ethology: evaluating animal health. Traditionally, such evaluations have been based on subjective assessments of debilitative signs; animals are judged ill when they appear depressed or off feed. Such assessments are prone to error but can be dramatically improved with training using well-defined clinical criteria. The availability of new technology to automatically record behaviors allows for increased use of objective measures; automated measures of feeding behavior and intake are increasingly available in commercial agriculture, and recent work has shown these to be valuable indicators of illness. Research has also identified behaviors indicative of risk of disease or injury. For example, the time spent standing on wet, concrete surfaces can be used to predict susceptibility to hoof injuries in dairy cattle, and time spent nuzzling the udder of the sow can predict the risk of crushing in piglets. One conceptual advance has been to view decreased exploration, feeding, social, sexual, and other behaviors as a coordinated response that helps afflicted individuals recover from illness. We argue that the sickness behaviors most likely to decline are those that provide longer-term fitness benefits (such as play), as animals divert resources to those functions of critical short-term value such as maintaining body temperature. We urge future research assessing the strength of motivation to express sickness behaviors, allowing for quantitative estimates of how sick an animal feels. Finally, we call for new theoretical and empirical work on behaviors that may act to signal health status, including behaviors that have evolved as honest (i.e., reliable) signals of condition for offspring-parent, inter- and intra-sexual, and predator-prey communication.
Evolving hard problems: Generating human genetics datasets with a complex etiology.
Himmelstein, Daniel S; Greene, Casey S; Moore, Jason H
2011-07-07
A goal of human genetics is to discover genetic factors that influence individuals' susceptibility to common diseases. Most common diseases are thought to result from the joint failure of two or more interacting components instead of single component failures. This greatly complicates both the task of selecting informative genetic variants and the task of modeling interactions between them. We and others have previously developed algorithms to detect and model the relationships between these genetic factors and disease. Previously these methods have been evaluated with datasets simulated according to pre-defined genetic models. Here we develop and evaluate a model-free evolution strategy to generate datasets which display a complex relationship between individual genotype and disease susceptibility. We show that this model-free approach is capable of generating a diverse array of datasets with distinct gene-disease relationships for an arbitrary interaction order and sample size. We specifically generate eight hundred Pareto fronts, one for each independent run of our algorithm. In each run the predictiveness of single genetic variants and pairs of genetic variants has been minimized, while the predictiveness of third-, fourth-, or fifth-order combinations is maximized. Two hundred runs of the algorithm are further dedicated to creating datasets with predictive fourth- or fifth-order interactions and minimized lower-level effects. This method and the resulting datasets will allow the capabilities of novel methods to be tested without pre-specified genetic models. This allows researchers to evaluate which methods will succeed on human genetics problems where the model is not known in advance. We further make freely available to the community the entire Pareto-optimal front of datasets from each run so that novel methods may be rigorously evaluated. These 76,600 datasets are available from http://discovery.dartmouth.edu/model_free_data/.
Rainfall prediction methodology with binary multilayer perceptron neural networks
NASA Astrophysics Data System (ADS)
Esteves, João Trevizoli; de Souza Rolim, Glauco; Ferraudo, Antonio Sergio
2018-05-01
Precipitation, in short periods of time, is a phenomenon associated with high levels of uncertainty and variability. Given its nature, traditional forecasting techniques are expensive and computationally demanding. This paper presents a soft computing technique to forecast the occurrence of rainfall in short time ranges by artificial neural networks (ANNs), in accumulated periods from 3 to 7 days for each climatic season, mitigating the necessity of predicting its amount. The premise is to reduce the variance, raise the bias of the data, and lower the responsibility of the model, which acts as a filter for quantitative models by removing subsequent occurrences of zero rainfall values that would otherwise bias a quantitative model and reduce its performance. The models were developed with time series from ten agriculturally relevant regions in Brazil; these places have the longest available weather time series and are the most deficient in accurate climate predictions. Sixty years of daily mean air temperature and accumulated precipitation were available and were used to estimate potential evapotranspiration and the water balance; these were the variables used as inputs for the ANN models. The mean accuracy of the model over all accumulated periods was 78% in summer, 71% in winter, 62% in spring, and 56% in autumn. It was identified that the effect of continentality, the effect of altitude, and the volume of normal precipitation have a direct impact on the accuracy of the ANNs. The models have peak performance in well-defined seasons but lose accuracy in transitional seasons and in places under the influence of macro-climatic and mesoclimatic effects, which indicates that this technique can be used to indicate the imminence of rainfall, with some limitations.
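A minimal version of such a binary rain/no-rain MLP is easy to sketch. The features and data below are synthetic stand-ins for the temperature, evapotranspiration, and water-balance inputs described; the network size is an arbitrary choice made for the example.

```python
# Hedged sketch: a binary multilayer perceptron predicting whether any rain
# occurs in an accumulated period. Data are synthetic stand-ins.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)
X = rng.normal(size=(2000, 4))  # e.g. mean temp, ETp, water balance, season
y = (0.9 * X[:, 1] - 0.7 * X[:, 2] + rng.normal(0, 1, 2000)) > 0  # rain yes/no

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                    random_state=0).fit(X_tr, y_tr)
print("accuracy:", mlp.score(X_te, y_te))
```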
Estimation of stream conditions in tributaries of the Klamath River, northern California
Manhard, Christopher V.; Som, Nicholas A.; Jones, Edward C.; Perry, Russell W.
2018-01-01
Because of their critical ecological role, stream temperature and discharge are requisite inputs for models of salmonid population dynamics. Coho Salmon inhabiting the Klamath Basin spend much of their freshwater life cycle in tributaries, but environmental data are often absent or only seasonally available at these locations. To address this information gap, we constructed daily averaged water temperature models that used simulated meteorological data to estimate daily tributary temperatures, and we used flow differentials recorded on the mainstem Klamath River to estimate daily tributary discharge. Observed temperature data were available for fourteen of the major salmon-bearing tributaries, which enabled estimation of tributary-specific model parameters at those locations. Water temperature data from six mid-Klamath Basin tributaries were used to estimate a global set of parameters for predicting water temperatures in the remaining tributaries. The resulting parameter sets were used to simulate water temperatures for each of 75 tributaries from 1980-2015. Goodness-of-fit statistics computed from a cross-validation analysis demonstrated a high precision of the tributary-specific models in predicting temperature in unobserved years and of the global model in predicting temperatures in unobserved streams. Klamath River discharge has been monitored by four gages that are broadly interspersed along the 292 kilometers from the Iron Gate Dam to the Klamath River mouth. These gages defined the upstream and downstream margins of three reaches. Daily discharge of tributaries within a reach was estimated for 1980-2015 based on drainage-area proportionate allocations of the discharge differential between the upstream and downstream margins. Comparisons with measured discharge on Indian Creek, a moderate-sized tributary with naturally regulated flows, revealed that the estimates effectively approximated both the variability and magnitude of discharge.
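The allocation step is a one-line proportionality and can be shown directly. The sketch below distributes a reach's discharge differential among its tributaries by drainage area; the helper name allocate_tributary_flow and the numbers are illustrative, not Klamath values.

```python
# Hedged sketch of drainage-area-proportionate allocation: the discharge
# differential between two mainstem gages is shared among the ungaged
# tributaries of the reach in proportion to their drainage areas.

def allocate_tributary_flow(q_upstream, q_downstream, drainage_areas):
    """Return estimated discharge for each tributary in the reach."""
    differential = q_downstream - q_upstream  # water gained along the reach
    total_area = sum(drainage_areas.values())
    return {name: differential * area / total_area
            for name, area in drainage_areas.items()}

# Example: 12 m3/s gained along a reach with three tributaries.
print(allocate_tributary_flow(q_upstream=40.0, q_downstream=52.0,
                              drainage_areas={"A": 120, "B": 300, "C": 80}))
```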
NASA Astrophysics Data System (ADS)
Gallice, Aurélien; Bavay, Mathias; Brauchli, Tristan; Comola, Francesco; Lehning, Michael; Huwald, Hendrik
2016-12-01
Climate change is expected to strongly impact the hydrological and thermal regimes of Alpine rivers within the coming decades. In this context, the development of hydrological models accounting for the specific dynamics of Alpine catchments appears as one of the promising approaches to reduce our uncertainty of future mountain hydrology. This paper describes the improvements brought to StreamFlow, an existing model for hydrological and stream temperature prediction built as an external extension to the physically based snow model Alpine3D. StreamFlow's source code has been entirely written anew, taking advantage of object-oriented programming to significantly improve its structure and ease the implementation of future developments. The source code is now publicly available online, along with a complete documentation. A special emphasis has been put on modularity during the re-implementation of StreamFlow, so that many model aspects can be represented using different alternatives. For example, several options are now available to model the advection of water within the stream. This allows for an easy and fast comparison between different approaches and helps in defining more reliable uncertainty estimates of the model forecasts. In particular, a case study in a Swiss Alpine catchment reveals that the stream temperature predictions are particularly sensitive to the approach used to model the temperature of subsurface flow, a fact which has been poorly reported in the literature to date. Based on the case study, StreamFlow is shown to reproduce hourly mean discharge with a Nash-Sutcliffe efficiency (NSE) of 0.82 and hourly mean temperature with a NSE of 0.78.
2014-01-01
Background Exposure measurement error is a concern in long-term PM2.5 health studies using ambient concentrations as exposures. We assessed error magnitude by estimating calibration coefficients as the association between personal PM2.5 exposures from validation studies and typically available surrogate exposures. Methods Daily personal and ambient PM2.5, and when available sulfate, measurements were compiled from nine cities, over 2 to 12 days. True exposure was defined as personal exposure to PM2.5 of ambient origin. Since PM2.5 of ambient origin could only be determined for five cities, personal exposure to total PM2.5 was also considered. Surrogate exposures were estimated as ambient PM2.5 at the nearest monitor or predicted outside subjects’ homes. We estimated calibration coefficients by regressing true on surrogate exposures in random effects models. Results When monthly-averaged personal PM2.5 of ambient origin was used as the true exposure, calibration coefficients equaled 0.31 (95% CI:0.14, 0.47) for nearest monitor and 0.54 (95% CI:0.42, 0.65) for outdoor home predictions. Between-city heterogeneity was not found for outdoor home PM2.5 for either true exposure. Heterogeneity was significant for nearest monitor PM2.5, for both true exposures, but not after adjusting for city-average motor vehicle number for total personal PM2.5. Conclusions Calibration coefficients were <1, consistent with previously reported chronic health risks using nearest monitor exposures being under-estimated when ambient concentrations are the exposure of interest. Calibration coefficients were closer to 1 for outdoor home predictions, likely reflecting less spatial error. Further research is needed to determine how our findings can be incorporated in future health studies. PMID:24410940
NASA Technical Reports Server (NTRS)
Stanley, D. C.; Huff, T. L.
2003-01-01
The purpose of this research effort was to: (1) provide a concise and well-defined property profile of current and developing composite materials using thermal and chemical characterization techniques and (2) optimize analytical testing requirements of materials. This effort applied a diverse array of methodologies to ascertain composite material properties. Often, a single method or technique will provide useful, but nonetheless incomplete, information on material composition and/or behavior. To more completely understand and predict material properties, a broad-based analytical approach is required. By developing a database of information comprised of both thermal and chemical properties, material behavior under varying conditions may be better understood. This is even more important in the aerospace community, where new composite materials and those in the development stage have little reference data. For example, Fourier transform infrared (FTIR) spectroscopy spectral databases available for identification of vapor phase spectra, such as those generated during experiments, generally refer to well-defined chemical compounds. Because this method renders a unique thermal decomposition spectral pattern, even larger, more diverse databases, such as those found in solid and liquid phase FTIR spectroscopy libraries, cannot be used. By combining this and other available methodologies, a database specifically for new materials and materials being developed at Marshall Space Flight Center can be generated. In addition, characterizing materials using this approach will be extremely useful in the verification of materials and identification of anomalies in NASA-wide investigations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ranganathan, V; Kumar, P; Bzdusek, K
Purpose: We propose a novel data-driven method to predict the achievability of clinical objectives upfront, before invoking the IMRT optimization. Methods: A new metric called “Geometric Complexity (GC)” is used to estimate the achievability of clinical objectives. Here, GC is the measure of the number of “unmodulated” beamlets or rays that intersect the Region-of-interest (ROI) and the target volume. We first compute the geometric complexity ratio (GCratio) between the GC of a ROI (say, parotid) in a reference plan and the GC of the same ROI in a given plan. The GCratio of a ROI indicates the relative geometric complexity of the ROI as compared to the same ROI in the reference plan. Hence GCratio can be used to predict if a defined clinical objective associated with the ROI can be met by the optimizer for a given case. Basically a higher GCratio indicates a lesser likelihood for the optimizer to achieve the clinical objective defined for a given ROI. Similarly, a lower GCratio indicates a higher likelihood for the optimizer to achieve the clinical objective defined for the given ROI. We have evaluated the proposed method on four Head and Neck cases using the Pinnacle3 (version 9.10.0) Treatment Planning System (TPS). Results: Out of the total of 28 clinical objectives from four head and neck cases included in the study, 25 were in agreement with the prediction, which implies an agreement of about 85% between predicted and obtained results. The Pearson correlation test shows a positive correlation between predicted and obtained results (Correlation = 0.82, r2 = 0.64, p < 0.005). Conclusion: The study demonstrates the feasibility of the proposed method in head and neck cases for predicting the achievability of clinical objectives with reasonable accuracy.
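The GCratio computation itself is simple once ray-ROI intersections are known. The sketch below reduces the ray tracing to boolean intersection masks and compares a given plan's GC to a reference plan's; the mask probabilities are invented, the helper geometric_complexity is hypothetical, and no treatment-planning geometry is modeled.

```python
# Hedged sketch of the GCratio idea: count unmodulated rays intersecting
# both an ROI and the target, then take the ratio against a reference plan.
import numpy as np

def geometric_complexity(rays_hit_roi, rays_hit_target):
    """GC = number of rays intersecting both the ROI and the target volume."""
    return int(np.sum(rays_hit_roi & rays_hit_target))

rng = np.random.default_rng(8)
n_rays = 10_000
# boolean intersection masks for the reference plan and the given plan
gc_ref = geometric_complexity(rng.random(n_rays) < 0.10,
                              rng.random(n_rays) < 0.30)
gc_new = geometric_complexity(rng.random(n_rays) < 0.18,
                              rng.random(n_rays) < 0.30)

gc_ratio = gc_new / gc_ref
print(f"GCratio={gc_ratio:.2f}  (higher -> objective less likely achievable)")
```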
Barlow, Gavin; Nathwani, Dilip; Davey, Peter
2007-01-01
Background The performance of CURB65 in predicting mortality in community‐acquired pneumonia (CAP) has been tested in two large observational studies. However, it has not been tested against generic sepsis and early warning scores, which are increasingly being advocated for identification of high‐risk patients in acute medical wards. Method A retrospective analysis was performed of data prospectively collected for a CAP quality improvement study. The ability to stratify mortality and performance characteristics (sensitivity, specificity, positive predictive value, negative predictive value and area under the receiver operating curve) were calculated for stratifications of CURB65, CRB65, the systemic inflammatory response syndrome (SIRS) criteria and the standardised early warning score (SEWS). Results 419 patients were included in the main analysis with a median age of 74 years (men = 47%). CURB65 and CRB65 stratified mortality in a more clinically useful way and had more favourable operating characteristics than SIRS or SEWS; for example, mortality in low‐risk patients was 2% when defined by CURB65, but 9% when defined by SEWS and 11–17% when defined by variations of the SIRS criteria. The sensitivity, specificity, positive predictive value and negative predictive value of CURB65 was 71%, 69%, 35% and 91%, respectively, compared with 62%, 73%, 35% and 89% for the best performing version of SIRS and 52%, 67%, 27% and 86% for SEWS. CURB65 had the greatest area under the receiver operating curve (0.78 v 0.73 for CRB65, 0.68 for SIRS and 0.64 for SEWS). Conclusions CURB65 should not be supplanted by SIRS or SEWS for initial prognostic assessment in CAP. Further research to identify better generic prognostic tools is required. PMID:16928720
NASA Astrophysics Data System (ADS)
Perez-Saez, Javier; Mande, Theophile; Larsen, Joshua; Ceperley, Natalie; Rinaldo, Andrea
2017-12-01
The transmission of waterborne diseases hinges on the interactions between the hydrology and ecology of hosts, vectors, and parasites, with the long-term absence of water constituting a strict lower bound. However, the link between spatio-temporal patterns of hydrological ephemerality and waterborne disease transmission is poorly understood and difficult to account for. The use of limited biophysical and hydroclimate information from otherwise data-scarce regions is therefore needed to characterize, classify, and predict river network ephemerality in a spatially explicit framework. Here, we develop a novel large-scale ephemerality classification and prediction methodology based on monthly discharge data, water and energy availability, and remote-sensing measures of vegetation, that is relevant to epidemiology and maintains a mechanistic link to catchment hydrologic processes. Specifically, with reference to the context of Burkina Faso in sub-Saharan Africa, we extract a relevant set of catchment covariates that include the aridity index, annual runoff estimation using the Budyko framework, and hysteretic relations between precipitation and vegetation. Five ephemerality classes, from permanent to strongly ephemeral, are defined from the duration of 0-flow periods, which also accounts for the sensitivity of river discharge to the long-lasting drought of the 1970s-80s in West Africa. Using such classes, a gradient-boosted tree-based prediction yielded three distinct geographic regions of ephemerality. Importantly, we observe a strong epidemiological association between our predictions of hydrologic ephemerality and the known spatial patterns of schistosomiasis, an endemic parasitic waterborne disease in which infection occurs through human-water contact and which requires aquatic snails as an intermediate host. The general nature of our approach and its relevance for predicting the hydrologic controls on schistosomiasis occurrence provides a pathway for the explicit inclusion of hydrologic drivers within epidemiological models of waterborne disease transmission.
Kraus, Virginia Byers; Feng, Sheng; Wang, ShengChu; White, Scott; Ainslie, Maureen; Brett, Alan; Holmes, Anthony; Charles, H Cecil
2009-12-01
To evaluate the effectiveness of using subchondral bone texture observed on a radiograph taken at baseline to predict progression of knee osteoarthritis (OA) over a 3-year period. A total of 138 participants in the Prediction of Osteoarthritis Progression study were evaluated at baseline and after 3 years. Fractal signature analysis (FSA) of the medial subchondral tibial plateau was performed on fixed flexion radiographs of 248 nonreplaced knees, using a commercially available software tool. OA progression was defined as a change in joint space narrowing (JSN) or osteophyte formation of 1 grade according to a standardized knee atlas. Statistical analysis of fractal signatures was performed using a new model based on correlating the overall shape of a fractal dimension curve with radius. Fractal signature of the medial tibial plateau at baseline was predictive of medial knee JSN progression (area under the receiver operating characteristic curve [AUC] 0.75) but was not predictive of osteophyte formation or progression of JSN in the lateral compartment. Traditional covariates (age, sex, body mass index, knee pain), general bone mineral content, and joint space width at baseline were no more effective than random variables for predicting OA progression (AUC 0.52-0.58). The predictive model with maximum effectiveness combined fractal signature at baseline, knee alignment, traditional covariates, and bone mineral content (AUC 0.79). We identified a prognostic marker of OA that is readily extracted from a plain radiograph using FSA. Although the method needs to be validated in a second cohort, our results indicate that the global shape approach to analyzing these data is a potentially efficient means of identifying individuals at risk of knee OA progression.
Wiswell, Jeffrey; Tsao, Kenyon; Bellolio, M Fernanda; Hess, Erik P; Cabrera, Daniel
2013-10-01
System 1 decision-making is fast, resource economic, and intuitive (eg, "your gut feeling") and System 2 is slow, resource intensive, and analytic (eg, "hypothetico-deductive"). We evaluated the performance of disposition and acuity prediction by emergency physicians (EPs) using a System 1 decision-making process. We conducted a prospective observational study of attending EPs and emergency medicine residents. Physicians were provided patient demographics, chief complaint, and vital sign data and made two assessments on initial presentation: (1) likely disposition (discharge vs admission) and (2) "sick" vs "not-sick". A patient was adjudicated as sick if he/she had a disease process that was potentially life or limb threatening based on pre-defined operational, financial, or educationally derived criteria. We obtained 266 observations in 178 different patients. Physicians predicted patient disposition with the following performance: sensitivity 87.7% (95% CI 81.4-92.1), specificity 65.0% (95% CI 56.1-72.9), LR+ 2.51 (95% CI 1.95-3.22), LR- 0.19 (95% CI 0.12-0.30). For the sick vs not-sick assessment, providers had the following performance: sensitivity 66.2% (95% CI 55.1-75.8), specificity 88.4% (95% CI 83.0-92.2), LR+ 5.69 (95% CI 3.72-8.69), LR- 0.38 (95% CI 0.28-0.53). EPs are able to accurately predict the disposition of ED patients using system 1 diagnostic reasoning based on minimal available information. However, the prognostic accuracy of acuity prediction was limited. © 2013.
Remission and incidence of obstructive sleep apnea from middle childhood to late adolescence.
Spilsbury, James C; Storfer-Isser, Amy; Rosen, Carol L; Redline, Susan
2015-01-01
To study the incidence, remission, and prediction of obstructive sleep apnea (OSA) from middle childhood to late adolescence. Longitudinal analysis. The Cleveland Children's Sleep and Health Study, an ethnically mixed, urban, community-based cohort, followed 8 y. There were 490 participants with overnight polysomnography data available at ages 8-11 and 16-19 y. Baseline participant characteristics and health history were ascertained from parent report and US census data. OSA was defined as an obstructive apnea-hypopnea index ≥ 5 or an obstructive apnea index ≥ 1. OSA prevalence was approximately 4% at each examination, but OSA largely did not persist from middle childhood to late adolescence. Habitual snoring and obesity predicted OSA in cross-sectional analyses at each time point. Residence in a disadvantaged neighborhood, African-American race, and premature birth also predicted OSA in middle childhood, whereas male sex, high body mass index, and history of tonsillectomy or adenoidectomy were risk factors among adolescents. Obesity, but not habitual snoring, in middle childhood predicted adolescent OSA. Because OSA in middle childhood usually remitted by adolescence and most adolescent cases were incident cases, criteria other than concern alone over OSA persistence or incidence should be used when making treatment decisions for pediatric OSA. Moreover, OSA's distinct risk factors at each time point underscore the need for alternative risk-factor assessments across pediatric ages. The greater importance of middle childhood obesity compared to snoring in predicting adolescent OSA provides support for screening, preventing, and treating obesity in childhood. © 2014 Associated Professional Sleep Societies, LLC.
van Stiphout, Ruud G P M; Valentini, Vincenzo; Buijsen, Jeroen; Lammering, Guido; Meldolesi, Elisa; van Soest, Johan; Leccisotti, Lucia; Giordano, Alessandro; Gambacorta, Maria A; Dekker, Andre; Lambin, Philippe
2014-11-01
To develop and externally validate a predictive model for pathologic complete response (pCR) for locally advanced rectal cancer (LARC) based on clinical features and early sequential (18)F-FDG PET-CT imaging. Prospective data (i.a. the THUNDER trial) were used to train (N=112, MAASTRO Clinic) and validate (N=78, Università Cattolica del S. Cuore) the model for pCR (ypT0N0). All patients received long-course chemoradiotherapy (CRT) and surgery. Clinical parameters were age, gender, clinical tumour (cT) stage and clinical nodal (cN) stage. PET parameters were SUVmax, SUVmean, metabolic tumour volume (MTV) and maximal tumour diameter, for which response indices between the pre-treatment and intermediate scan were calculated. Using multivariate logistic regression, three probability groups for pCR were defined. The pCR rates were 21.4% (training) and 23.1% (validation). The selected predictive features for pCR were cT-stage, cN-stage, response index of SUVmean and maximal tumour diameter during treatment. The models' performances (AUC) were 0.78 (training) and 0.70 (validation). The high probability group for pCR resulted in 100% correct predictions for training and 67% for validation. The model is available on the website www.predictcancer.org. The developed predictive model for pCR is accurate and externally validated. This model may assist in treatment decisions during CRT to select complete responders for a wait-and-see policy, good responders for an extra RT boost and bad responders for additional chemotherapy. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
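The modelling recipe described above (a response index between the pre-treatment and intermediate PET scans, combined with clinical stage in a logistic regression scored by AUC) can be sketched as follows. All data below are synthetic and the variable names are ours; the published model coefficients are not reproduced:

```python
# Sketch of a response-index + clinical-stage logistic model for pCR.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 112                                        # size of the training cohort
ct = rng.integers(2, 5, n)                     # clinical T stage (2-4)
cn = rng.integers(0, 3, n)                     # clinical N stage (0-2)
suv_pre = rng.uniform(5, 20, n)                # SUVmean, pre-treatment scan
suv_mid = suv_pre * rng.uniform(0.3, 1.0, n)   # SUVmean, intermediate scan
ri_suv = (suv_pre - suv_mid) / suv_pre         # response index of SUVmean
diam = rng.uniform(2, 8, n)                    # maximal tumour diameter (cm)

# Synthetic outcome: a larger metabolic response raises pCR probability
p = 1 / (1 + np.exp(-(-3 + 4 * ri_suv - 0.3 * ct)))
y = rng.binomial(1, p)

X = np.column_stack([ct, cn, ri_suv, diam])
model = LogisticRegression().fit(X, y)
print("in-sample AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))
```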
How Big Is It Really? Assessing the Efficacy of Indirect Estimates of Body Size in Asian Elephants
Chapman, Simon N.; Mumby, Hannah S.; Crawley, Jennie A. H.; Mar, Khyne U.; Htut, Win; Thura Soe, Aung; Aung, Htoo Htoo; Lummaa, Virpi
2016-01-01
Information on an organism’s body size is pivotal in understanding its life history and fitness, as well as helping inform conservation measures. However, for many species, particularly large-bodied wild animals, taking accurate body size measurements can be a challenge. Various means to estimate body size have been employed, from more direct methods such as using photogrammetry to obtain height or length measurements, to indirect prediction of weight using other body morphometrics or even the size of dung boli. It is often unclear how accurate these measures are because they cannot be compared to objective measures. Here, we investigate how well existing estimation equations predict the actual body weight of Asian elephants Elephas maximus, using body measurements (height, chest girth, length, foot circumference and neck circumference) taken directly from a large population of semi-captive animals in Myanmar (n = 404). We then define new and better fitting formulas to predict body weight in Myanmar elephants from these readily available measures. We also investigate whether the important parameters height and chest girth can be estimated from photographs (n = 151). Our results show considerable variation in the ability of existing estimation equations to predict weight, and that the equations proposed in this paper predict weight better in almost all circumstances. We also find that measurements from standardised photographs reflect body height and chest girth after applying minor adjustments. Our results have implications for size estimation of large wild animals in the field, as well as for management in captive settings. PMID:26938085
Oberg, T
2007-01-01
The vapour pressure is the most important property of an anthropogenic organic compound in determining its partitioning between the atmosphere and the other environmental media. The enthalpy of vaporisation quantifies the temperature dependence of the vapour pressure and its value around 298 K is needed for environmental modelling. The enthalpy of vaporisation can be determined by different experimental methods, but estimation methods are needed to extend the current database and several approaches are available from the literature. However, these methods have limitations, such as a need for other experimental results as input data, a limited applicability domain, a lack of domain definition, and a lack of predictive validation. Here we have attempted to develop a quantitative structure-property relationship (QSPR) that has general applicability and is thoroughly validated. Enthalpies of vaporisation at 298 K were collected from the literature for 1835 pure compounds. The three-dimensional (3D) structures were optimised and each compound was described by a set of computationally derived descriptors. The compounds were randomly assigned into a calibration set and a prediction set. Partial least squares regression (PLSR) was used to estimate a low-dimensional QSPR model with 12 latent variables. The predictive performance of this model, within the domain of application, was estimated at n = 560, q²ext = 0.968 and s = 0.028 (log-transformed values). The QSPR model was subsequently applied to a database of 100,000+ structures, after a similar 3D optimisation and descriptor generation. Reliable predictions can be reported for compounds within the previously defined applicability domain.
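A minimal sketch of the PLSR-based QSPR workflow, using synthetic data in place of the 1835 compounds and their computed 3D descriptors; the 12-latent-variable choice mirrors the abstract, everything else is illustrative:

```python
# Sketch: calibration/prediction split, 12-component PLS, external q2.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
X = rng.normal(size=(1835, 300))          # stand-in molecular descriptors
w = rng.normal(size=300)
y = X @ w * 0.01 + rng.normal(scale=0.05, size=1835)  # log-scale dHvap surrogate

X_cal, X_ext, y_cal, y_ext = train_test_split(X, y, test_size=0.3, random_state=1)
pls = PLSRegression(n_components=12).fit(X_cal, y_cal)   # 12 latent variables
q2_ext = r2_score(y_ext, pls.predict(X_ext).ravel())     # external predictive R^2
print(f"external q2 = {q2_ext:.3f}")
```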
Type D personality and the development of PTSD symptoms: a prospective study.
Rademaker, Arthur R; van Zuiden, Mirjam; Vermetten, Eric; Geuze, Elbert
2011-05-01
Psychological trauma and prolonged stress may cause mental disorders such as posttraumatic stress disorder (PTSD). Pretrauma personality is an important determinant of posttraumatic adjustment. Specifically, trait neuroticism has been identified as a risk factor for PTSD. Additionally, the combination of high negative affectivity or neuroticism with marked social inhibition or introversion, also called Type D personality (Denollet, 2000), may constitute a risk factor for PTSD. No previous research has examined pretrauma Type D personality in relation to PTSD. The present study examined the predictive validity of the Type D personality construct in a sample of Dutch soldiers. Data were collected prior to and 6 months after military deployment to Afghanistan. Separate multiple regression analyses were performed to examine the predictive validity of Type D personality. First, Type D personality was defined as the interaction between negative affect and social inhibition (Na × Si). In a second analysis, Type D was defined following cutoff criteria recommended by Denollet (2000). Results showed that negative affectivity was a significant predictor of PTSD symptoms. Social inhibition and the interaction Na × Si did not add to the amount of explained variance in postdeployment PTSD scores over the effects of childhood abuse, negative affectivity, and prior psychological symptoms. A second analysis showed that Type D personality (dichotomous) did not add to the amount of explained variance in postdeployment PTSD scores over the effects of childhood abuse and prior psychological symptoms. Therefore, Type D personality appears to be of limited value in explaining the development of combat-related PTSD symptoms.
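The two operationalisations of Type D compared above (a continuous Na × Si interaction versus a dichotomous cutoff group) correspond to two regression specifications. A hedged sketch with hypothetical data and column names, using a cutoff of 0 on z-scored scales purely for illustration (Denollet's actual DS14 cutoffs differ):

```python
# Sketch of the two Type D specifications as regression models.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "na": rng.normal(size=n),              # negative affectivity (z-scored)
    "si": rng.normal(size=n),              # social inhibition (z-scored)
    "abuse": rng.binomial(1, 0.2, n),      # childhood abuse
    "prior": rng.normal(size=n),           # prior psychological symptoms
})
df["ptsd"] = 0.5 * df["na"] + 0.4 * df["prior"] + 0.6 * df["abuse"] + rng.normal(size=n)

# (1) continuous definition: na:si interaction alongside the main effects
m1 = smf.ols("ptsd ~ abuse + prior + na * si", data=df).fit()
# (2) dichotomous definition via a (hypothetical) cutoff of 0 on both scales
df["type_d"] = ((df["na"] > 0) & (df["si"] > 0)).astype(int)
m2 = smf.ols("ptsd ~ abuse + prior + type_d", data=df).fit()
print(m1.params["na:si"], m2.params["type_d"])
```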
Rochlin, I.; Harding, K.; Ginsberg, H.S.; Campbell, S.R.
2008-01-01
Five years of CDC light trap data from Suffolk County, NY, were analyzed to compare the applicability of human population density (HPD) and land use/cover (LUC) classification systems to describe mosquito abundance and to determine whether certain mosquito species of medical importance tend to be more common in urban (defined by HPD) or residential (defined by LUC) areas. Eleven study sites were categorized as urban or rural using U.S. Census Bureau data and by LUC types using geographic information systems (GISs). Abundance and percent composition of nine mosquito taxa, all known or potential vectors of arboviruses, were analyzed to determine spatial patterns. By HPD definitions, three mosquito species, Aedes canadensis (Theobald), Coquillettidia perturbans (Walker), and Culiseta melanura (Coquillett), differed significantly between habitat types, with higher abundance and percent composition in rural areas. Abundance and percent composition of these three species also increased with freshwater wetland, natural vegetation areas, or a combination when using LUC definitions. Additionally, two species, Ae. canadensis and Cs. melanura, were negatively affected by increased residential area. One species, Aedes vexans (Meigen), had higher percent composition in urban areas. Two medically important taxa, Culex spp. and Aedes triseriatus (Say), were proportionally more prevalent in residential areas by LUC classification, as was Aedes trivittatus (Coquillett). Although HPD classification was readily available and had some predictive value, LUC classification resulted in higher spatial resolution and better ability to develop location specific predictive models.
NASA Astrophysics Data System (ADS)
Soja, Amber; Westberg, David; Stackhouse, Paul, Jr.; McRae, Douglas; Jin, Ji-Zhong; Sukhinin, Anatoly
2010-05-01
Fire is the dominant disturbance that precipitates ecosystem change in boreal regions, and fire is largely under the control of weather and climate. Fire frequency, fire severity, area burned and fire season length are predicted to increase in boreal regions under current climate change scenarios. Therefore, changes in fire regimes have the potential to compel ecological change, moving ecosystems more quickly towards equilibrium with a new climate. The ultimate goal of this research is to assess the viability of large-scale (1°) data to be used to define fire weather danger and fire regimes, so that large-scale data can be confidently used to predict future fire regimes using large-scale fire weather data, like that available from current Intergovernmental Panel on Climate Change (IPCC) climate change scenarios. In this talk, we intend to: (1) evaluate Fire Weather Indices (FWI) derived using reanalysis and interpolated station data; (2) discuss the advantages and disadvantages of using these distinct data sources; and (3) highlight established relationships between large-scale fire weather data, area burned, active fires and ecosystems burned. Specifically, the Canadian Forestry Service (CFS) Fire Weather Index (FWI) will be derived using: (1) NASA Goddard Earth Observing System version 4 (GEOS-4) large-scale reanalysis and NASA Global Precipitation Climatology Project (GPCP) data; and (2) National Climatic Data Center (NCDC) surface station-interpolated data. Requirements of the FWI are local noon surface-level air temperature, relative humidity, wind speed, and daily (noon-noon) rainfall. GEOS-4 reanalysis and NCDC station-interpolated fire weather indices are generally consistent spatially, temporally and quantitatively. Additionally, increased fire activity coincides with increased FWI ratings in both data products. Relationships have been established between large-scale FWI and area burned, fire frequency, and ecosystem types, and these can be used to estimate historic and future fire regimes.
dPotFit: A computer program to fit diatomic molecule spectral data to potential energy functions
NASA Astrophysics Data System (ADS)
Le Roy, Robert J.
2017-01-01
This paper describes program dPotFit, which performs least-squares fits of diatomic molecule spectroscopic data consisting of any combination of microwave, infrared or electronic vibrational bands, fluorescence series, and tunneling predissociation level widths, involving one or more electronic states and one or more isotopologs, and, for appropriate systems, second virial coefficient data, to determine analytic potential energy functions defining the observed levels and other properties of each state. Four families of analytical potential functions are available for fitting in the current version of dPotFit: the Expanded Morse Oscillator (EMO) function, the Morse/Long-Range (MLR) function, the Double-Exponential/Long-Range (DELR) function, and the 'Generalized Potential Energy Function' (GPEF) of Šurkus, which incorporates a variety of polynomial functional forms. In addition, dPotFit allows sets of experimental data to be tested against predictions generated from three other families of analytic functions, namely, the 'Hannover Polynomial' (or "X-expansion") function and the 'Tang-Toennies' and Scoles-Aziz 'HFD' exponential-plus-van der Waals functions, and from interpolation-smoothed pointwise potential energies, such as those obtained from ab initio or RKR calculations. dPotFit also allows the fits to determine atomic-mass-dependent Born-Oppenheimer breakdown functions, and singlet-state Λ-doubling or 2Σ splitting radial strength functions for one or more electronic states. dPotFit always reports both the 95% confidence limit uncertainty and the "sensitivity" of each fitted parameter; the latter indicates the number of significant digits that must be retained when rounding fitted parameters, in order to ensure that predictions remain in full agreement with experiment. It will also, if requested, apply a "sequential rounding and refitting" procedure to yield a final parameter set defined by a minimum number of significant digits, while ensuring no significant loss of accuracy in the predictions yielded by those parameters.
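To make one of the potential forms concrete, here is an illustrative evaluation of the Expanded Morse Oscillator, V(r) = De·(1 − exp(−β(r)·(r − re)))², where β(r) is a polynomial in the reduced variable y = (r^q − re^q)/(r^q + re^q). The parameter values below are invented for illustration, not fitted constants from dPotFit:

```python
# Sketch: evaluating an EMO potential on a radial grid.
import numpy as np

def emo_potential(r, De, re, betas, q=3):
    """EMO form: a Morse function whose exponent coefficient varies with r."""
    y = (r**q - re**q) / (r**q + re**q)
    beta = np.polynomial.polynomial.polyval(y, betas)  # beta0 + beta1*y + ...
    return De * (1.0 - np.exp(-beta * (r - re)))**2

r = np.linspace(1.5, 8.0, 200)   # internuclear distance (angstrom)
V = emo_potential(r, De=20000.0, re=2.5, betas=[1.2, 0.1, -0.05])  # cm^-1
print(V.min(), V[-1])            # minimum ~0 at re; V tends to De at large r
```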
Ramírez, David; Caballero, Julio
2018-04-28
Molecular docking is the most frequently used computational method for studying the interactions between organic molecules and biological macromolecules. In this context, docking allows predicting the preferred pose of a ligand inside a receptor binding site. However, the selection of the "best" solution is not a trivial task, despite the widely accepted selection criterion that the best pose corresponds to the best energy score. Here, several rigid-target docking methods were evaluated on the same dataset with respect to their ability to reproduce crystallographic binding orientations, to test whether the best energy score is a reliable criterion for selecting the best solution. For this, two experiments were performed: (A) reconstructing the ligand-receptor complex by docking the ligand into its own crystal structure receptor (defined as self-docking), and (B) reconstructing the ligand-receptor complex by docking the ligand into a crystal structure receptor that contains another ligand (defined as cross-docking). Root-mean-square deviation (RMSD) was used to evaluate how different the obtained docking orientation is from the corresponding co-crystallized pose of the same ligand molecule. We found that docking scoring functions are capable of predicting crystallographic binding orientations, but the best-ranked solution according to the docking energy is not always the pose that reproduces the experimental binding orientation. This was already the case in self-docking, and it was even more pronounced in cross-docking. Taking into account that docking is typically used for predictive purposes, as in cross-docking experiments, our results indicate that the best energy score is not a reliable criterion for selecting the best solution in common docking applications. It is strongly recommended to choose the best docking solution according to the scoring function along with additional structural criteria described for analogous ligands, to ensure the selection of a correct docking solution.
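The RMSD criterion used to compare a docked pose against the co-crystallized pose reduces to a simple per-atom computation, assuming matched atom ordering. A sketch (the 2.0 Å success cutoff mentioned in the comment is a common convention, not a rule stated in the abstract):

```python
# Sketch: RMSD between a docked pose and the crystallographic pose.
import numpy as np

def pose_rmsd(coords_docked: np.ndarray, coords_crystal: np.ndarray) -> float:
    """Both arrays are (n_atoms, 3), with identical atom ordering assumed."""
    diff = coords_docked - coords_crystal
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

docked = np.random.default_rng(3).normal(size=(30, 3))
crystal = docked + 0.3                 # toy perturbation of every coordinate
rmsd = pose_rmsd(docked, crystal)      # ~0.52 for this uniform shift
print(rmsd, "reproduces crystal pose" if rmsd <= 2.0 else "fails 2.0 A cutoff")
```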
Cervero, Miguel; Torres, Rafael; Jusdado, Juan José; Pastor, Susana; Agud, Jose Luis
2016-04-15
To determine the prevalence and types of clinically significant drug-drug interactions (CSDI) in the drug regimens of HIV-infected patients receiving antiretroviral treatment. Retrospective review of a database. Centre: Hospital Universitario Severo Ochoa, Infectious Unit. One hundred and forty-two participants followed by one of the authors were selected from January 1985 to December 2014. From their outpatient medical records we reviewed information from the last available visit of the participants, in relation to HIV infection, comorbidities, demographics and the drugs that they were receiving, both antiretroviral drugs and drugs not related to HIV infection. We defined CSDI from the information sheet and/or database on antiretroviral drug interactions of the University of Liverpool (http://www.hiv-druginteractions.org) and we developed a diagnostic tool to predict the possibility of CSDI. By multivariate logistic regression analysis and by estimating the diagnostic performance curve obtained, we identified a quick tool to predict the existence of drug interactions. Of 142 patients, 39 (29.11%) had some type of CSDI, and in 11.2% two or more interactions were detected. In only one patient was the combination of drugs contraindicated (this patient was receiving darunavir/r and quetiapine). In multivariate analyses, predictors of CSDI were regimen type (PI or NNRTI) and the use of 3 or more non-antiretroviral drugs (AUC 0.886, 95% CI 0.828 to 0.944; P=.0001). The risk was 18.55 times higher in those receiving an NNRTI and 27.95 times higher in those receiving a PI, compared to those taking raltegravir. Drug interactions, including those defined as clinically significant, are common in HIV-infected patients treated with antiretroviral drugs, and the risk is greater in PI-based regimens. Raltegravir-based prescribing, especially in patients who receive at least 3 non-HIV drugs, could avoid interactions. Copyright © 2016 Elsevier España, S.L.U. All rights reserved.
Approaches to setting organism-based ballast water discharge standards
Lee, Henry; Reusser, Deborah A.; Frazier, Melanie
2013-01-01
As a vector by which foreign species invade coastal and freshwater waterbodies, ballast water discharge from ships is recognized as a major environmental threat. The International Maritime Organization (IMO) drafted an international treaty establishing ballast water discharge standards based on the number of viable organisms per volume of ballast discharge for different organism size classes. Concerns that the IMO standards are not sufficiently protective have initiated several state and national efforts in the United States to develop more stringent standards. We evaluated seven approaches to establishing discharge standards for the >50-μm size class: (1) expert opinion/management consensus, (2) zero detectable living organisms, (3) natural invasion rates, (4) reaction–diffusion models, (5) population viability analysis (PVA) models, (6) per capita invasion probabilities (PCIP), and (7) experimental studies. Because of the difficulty in synthesizing scientific knowledge in an unbiased and transparent fashion, we recommend the use of quantitative models instead of expert opinion. The actual organism concentration associated with a “zero detectable organisms” standard is defined by the statistical rigor of its monitoring program; thus it is not clear whether such a standard is as stringent as other standards. For several reasons, the natural invasion rate, reaction–diffusion, and experimental approaches are not considered suitable for generating discharge standards. PVA models can be used to predict the likelihood of establishment of introduced species but are limited by a lack of population vital rates for species characteristic of ballast water discharges. Until such rates become available, PVA models are better suited to evaluate relative efficiency of proposed standards rather than predicting probabilities of invasion. The PCIP approach, which is based on historical invasion rates at a regional scale, appears to circumvent many of the indicated problems, although it may underestimate invasions by asexual and parthenogenic species. Further research is needed to better define propagule dose–responses, densities at which Allee effects occur, approaches to predicting the likelihood of invasion from multi-species introductions, and generation of formal comparisons of approaches using standardized scenarios.
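The PCIP logic can be illustrated with a short worked calculation: a historical invasion rate divided by historical propagule delivery gives a per-organism probability, which then scores a candidate discharge standard. All numbers below are invented for illustration:

```python
# Sketch: per capita invasion probability (PCIP) applied to a draft standard.
invasions_per_year = 0.5        # historical invasions/yr in a hypothetical region
organisms_per_year = 5e12       # historical organisms discharged/yr (>50 um class)
pcip = invasions_per_year / organisms_per_year   # invasion probability per organism

standard = 10.0                 # proposed standard: viable organisms per m^3
discharge_m3_per_year = 1e8     # projected regional ballast discharge volume
expected_invasions = pcip * standard * discharge_m3_per_year
print(f"PCIP = {pcip:.2e}; expected invasions/yr under standard = {expected_invasions:.4f}")
```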
Predicting School Performance with the Early Screening Inventory.
ERIC Educational Resources Information Center
Meisels, Samuel J.; And Others
1984-01-01
Proposes criteria for defining and selecting preschool developmental screening instruments and describes the Early Screening Inventory (ESI), a developmental screening instrument designed to satisfy these criteria. Presents results of several studies demonstrating that the ESI predicts school performance with moderate to excellent accuracy through…
A Vision and Strategy:Predictive Ecotoxicology in the 21st Century
The manuscript provides an introduction and overview for a series of five papers resulting from a SETAC Pellston Workshop titled A Vision and Strategy for Predictive Ecotoxicology in the 21st Century: Defining Adverse Outcome Pathways Associated with Ecological Risk. It proposes...
Predicting oligonucleotide affinity to nucleic acid targets.
Mathews, D H; Burkard, M E; Freier, S M; Wyatt, J R; Turner, D H
1999-01-01
A computer program, OligoWalk, is reported that predicts the equilibrium affinity of complementary DNA or RNA oligonucleotides to an RNA target. This program considers the predicted stability of the oligonucleotide-target helix and the competition with predicted secondary structure of both the target and the oligonucleotide. Both unimolecular and bimolecular oligonucleotide self structure are considered with a user-defined concentration. The application of OligoWalk is illustrated with three comparisons to experimental results drawn from the literature. PMID:10580474
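The thermodynamic bookkeeping behind an OligoWalk-style affinity estimate can be sketched as a free-energy cycle: the duplex free energy is penalised by the cost of opening self-structure in both the target and the oligonucleotide. The values below are placeholders, not nearest-neighbour calculations:

```python
# Sketch: net binding free energy as duplex stability minus self-structure costs.
def net_binding_dG(dG_duplex: float,
                   dG_target_structure: float,
                   dG_oligo_structure: float) -> float:
    """
    dG_duplex: free energy of the oligo-target helix (kcal/mol, negative).
    dG_target_structure: stability of local target self-structure that must
        be disrupted (negative; subtracting it acts as a penalty).
    dG_oligo_structure: stability of oligo uni/bimolecular self-structure.
    """
    return dG_duplex - dG_target_structure - dG_oligo_structure

# Example: a -20 kcal/mol duplex, -5 of target structure, -3 of oligo structure
print(net_binding_dG(-20.0, -5.0, -3.0))   # -12.0 kcal/mol net affinity
```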
Gorman, Julian; Pearson, Diane; Whitehead, Peter
2008-01-01
Information on distribution and relative abundance of species is integral to sustainable management, especially if they are to be harvested for subsistence or commerce. In northern Australia, natural landscapes are vast, centers of population few, access is difficult, and Aboriginal resource centers and communities have limited funds and infrastructure. Consequently, defining distribution and relative abundance by comprehensive ground survey is difficult and expensive. This highlights the need for simple, cheap, automated methodologies to predict the distribution of species in use, or having potential for use, in commercial enterprise. The technique applied here uses a Geographic Information System (GIS) to make predictions of probability of occurrence using an inductive modeling technique based on Bayes' theorem. The study area is in the Maningrida region, central Arnhem Land, in the Northern Territory, Australia. The species examined, Cycas arnhemica and Brachychiton diversifolius, are currently being 'wild harvested' in commercial trials, involving sale of decorative plants and use as carving wood, respectively. This study involved limited and relatively simple ground surveys requiring approximately 7 days of effort for each species. The overall model performance was evaluated using Cohen's kappa statistics. The predictive ability of the model for C. arnhemica was classified as moderate and for B. diversifolius as fair. The difference in model performance can be attributed to the pattern of distribution of these species: C. arnhemica tends to occur in a clumped distribution due to relatively short-distance dispersal of its large seeds and vegetative growth from long-lived rhizomes, while B. diversifolius seeds are smaller and more widely dispersed across the landscape. The output from the analysis predicts trends in species distribution that are consistent with independent on-site sampling for each species and therefore should prove useful in gauging the extent of resource availability. However, some caution needs to be applied, as the models tend to over-predict presence, which is a function of distribution patterns and of other variables operating in the landscape, such as fire histories, which were not included in the model due to limited availability of data.
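The inductive Bayesian step described above amounts to updating a prior occupancy probability with per-layer conditional probabilities estimated from ground-survey counts. A sketch, treating layers as independent (a naive-Bayes simplification) with invented probabilities and layer names:

```python
# Sketch: P(presence | map classes) via Bayes' theorem over GIS layers.
prior_presence = 0.2                       # fraction of surveyed cells occupied

# P(layer value | presence) and P(layer value | absence), per GIS layer
p_given_presence = {"soil=sandy": 0.6, "veg=woodland": 0.7}
p_given_absence  = {"soil=sandy": 0.3, "veg=woodland": 0.4}

def posterior(cell_layers):
    """Posterior occurrence probability for one grid cell."""
    num = prior_presence
    den = 1.0 - prior_presence
    for layer in cell_layers:
        num *= p_given_presence[layer]
        den *= p_given_absence[layer]
    return num / (num + den)

print(posterior(["soil=sandy", "veg=woodland"]))   # ~0.47 for these toy values
```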
Assessing microscope image focus quality with deep learning.
Yang, Samuel J; Berndl, Marc; Michael Ando, D; Barch, Mariya; Narayanaswamy, Arunachalam; Christiansen, Eric; Hoyer, Stephan; Roat, Chris; Hung, Jane; Rueden, Curtis T; Shankar, Asim; Finkbeiner, Steven; Nelson, Philip
2018-03-15
Large image datasets acquired on automated microscopes typically have some fraction of low quality, out-of-focus images, despite the use of hardware autofocus systems. Identification of these images using automated image analysis with high accuracy is important for obtaining a clean, unbiased image dataset. Complicating this task is the fact that image focus quality is only well-defined in foreground regions of images, and as a result, most previous approaches only enable a computation of the relative difference in quality between two or more images, rather than an absolute measure of quality. We present a deep neural network model capable of predicting an absolute measure of image focus on a single image in isolation, without any user-specified parameters. The model operates at the image-patch level, and also outputs a measure of prediction certainty, enabling interpretable predictions. The model was trained on only 384 in-focus Hoechst (nuclei) stain images of U2OS cells, which were synthetically defocused to one of 11 absolute defocus levels during training. The trained model can generalize on previously unseen real Hoechst stain images, identifying the absolute image focus to within one defocus level (approximately 3 pixel blur diameter difference) with 95% accuracy. On a simpler binary in/out-of-focus classification task, the trained model outperforms previous approaches on both Hoechst and Phalloidin (actin) stain images (F-scores of 0.89 and 0.86, respectively over 0.84 and 0.83), despite only having been presented Hoechst stain images during training. Lastly, we observe qualitatively that the model generalizes to two additional stains, Hoechst and Tubulin, of an unseen cell type (Human MCF-7) acquired on a different instrument. Our deep neural network enables classification of out-of-focus microscope images with both higher accuracy and greater precision than previous approaches via interpretable patch-level focus and certainty predictions. The use of synthetically defocused images precludes the need for a manually annotated training dataset. The model also generalizes to different image and cell types. The framework for model training and image prediction is available as a free software library and the pre-trained model is available for immediate use in Fiji (ImageJ) and CellProfiler.
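The synthetic-defocus idea that removes the need for manual annotation can be sketched directly: in-focus images are blurred to one of the 11 known defocus levels, so absolute labels come for free. Gaussian blur below stands in for the true microscope defocus kernel, which is an approximation:

```python
# Sketch: building an absolutely-labelled focus dataset by synthetic defocus.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(4)
in_focus = rng.random((384, 84, 84)).astype(np.float32)   # placeholder images

n_levels = 11
images, labels = [], []
for img in in_focus:
    level = int(rng.integers(n_levels))    # absolute defocus class 0..10
    sigma = 0.5 * level                    # blur grows with defocus level
    images.append(gaussian_filter(img, sigma) if level else img)
    labels.append(level)

X, y = np.stack(images), np.array(labels)  # ready for patch-level training
print(X.shape, np.bincount(y))
```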
Crystal structure prediction supported by incomplete experimental data
NASA Astrophysics Data System (ADS)
Tsujimoto, Naoto; Adachi, Daiki; Akashi, Ryosuke; Todo, Synge; Tsuneyuki, Shinji
2018-05-01
We propose an efficient theoretical scheme for structure prediction based on the idea of combining methods, in which theoretical calculations and experimental data are optimized against simultaneously. In this scheme, we formulate a cost function based on a weighted sum of interatomic potential energies and a penalty function which is defined with partial experimental data totally insufficient for conventional structure analysis. In particular, we define the cost function using "crystallinity" formulated with only peak positions within a small range of the x-ray-diffraction pattern. We apply this method to well-known polymorphs of SiO2 and C with up to 108 atoms in the simulation cell and show that it reproduces the correct structures efficiently with very limited information from diffraction peaks. This scheme opens a new avenue for determining and predicting structures that are difficult to determine by conventional methods.
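The combined cost function has the schematic form cost = w·E_pot + penalty(peaks). A sketch with a stand-in penalty that measures the distance from each observed diffraction peak to the nearest simulated peak; the paper's actual "crystallinity" definition is more elaborate:

```python
# Sketch: weighted potential energy plus a diffraction-peak penalty.
import numpy as np

def cost(structure_energy: float,
         simulated_peaks: np.ndarray,
         observed_peaks: np.ndarray,
         weight: float = 1.0,
         tol: float = 0.1) -> float:
    # For each observed peak, distance to the nearest simulated peak,
    # clipped at a tolerance so a missing peak contributes a bounded cost.
    dists = np.abs(simulated_peaks[:, None] - observed_peaks[None, :]).min(axis=0)
    penalty = np.clip(dists, 0.0, tol).sum() / (tol * len(observed_peaks))
    return weight * structure_energy + penalty

obs = np.array([21.3, 26.6, 36.1])        # observed 2-theta positions (degrees)
sim = np.array([21.2, 26.7, 36.0, 39.5])  # peaks simulated from a trial structure
print(cost(structure_energy=-5.0, simulated_peaks=sim, observed_peaks=obs))
```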
Liseron-Monfils, Christophe; Lewis, Tim; Ashlock, Daniel; McNicholas, Paul D; Fauteux, François; Strömvik, Martina; Raizada, Manish N
2013-03-15
The discovery of genetic networks and cis-acting DNA motifs underlying their regulation is a major objective of transcriptome studies. The recent release of the maize genome (Zea mays L.) has facilitated in silico searches for regulatory motifs. Several algorithms exist to predict cis-acting elements, but none have been adapted for maize. A benchmark data set was used to evaluate the accuracy of three motif discovery programs: BioProspector, Weeder and MEME. Analysis showed that each motif discovery tool had limited accuracy and appeared to retrieve a distinct set of motifs. Therefore, using the benchmark, statistical filters were optimized to reduce the false discovery ratio, and then remaining motifs from all programs were combined to improve motif prediction. These principles were integrated into a user-friendly pipeline for motif discovery in maize called Promzea, available at http://www.promzea.org and on the Discovery Environment of the iPlant Collaborative website. Promzea was subsequently expanded to include rice and Arabidopsis. Within Promzea, a user enters cDNA sequences or gene IDs; corresponding upstream sequences are retrieved from the maize genome. Predicted motifs are filtered, combined and ranked. Promzea searches the chosen plant genome for genes containing each candidate motif, providing the user with the gene list and corresponding gene annotations. Promzea was validated in silico using a benchmark data set: the Promzea pipeline showed a 22% increase in nucleotide sensitivity compared to the best standalone program tool, Weeder, with equivalent nucleotide specificity. Promzea was also validated by its ability to retrieve the experimentally defined binding sites of transcription factors that regulate the maize anthocyanin and phlobaphene biosynthetic pathways. Promzea predicted additional promoter motifs, and genome-wide motif searches by Promzea identified 127 non-anthocyanin/phlobaphene genes that each contained all five predicted promoter motifs in their promoters, perhaps uncovering a broader co-regulated gene network. Promzea was also tested against tissue-specific microarray data from maize. An online tool customized for promoter motif discovery in plants has been generated called Promzea. Promzea was validated in silico by its ability to retrieve benchmark motifs and experimentally defined motifs and was tested using tissue-specific microarray data. Promzea predicted broader networks of gene regulation associated with the historic anthocyanin and phlobaphene biosynthetic pathways. Promzea is a new bioinformatics tool for understanding transcriptional gene regulation in maize and has been expanded to include rice and Arabidopsis.
Evaluation and integration of existing methods for computational prediction of allergens
2013-01-01
Background Allergy involves a series of complex reactions and factors that contribute to the development of the disease and triggering of the symptoms, including rhinitis, asthma, atopic eczema, skin sensitivity, even acute and fatal anaphylactic shock. Prediction and evaluation of the potential allergenicity is of importance for safety evaluation of foods and other environment factors. Although several computational approaches for assessing the potential allergenicity of proteins have been developed, their performance and relative merits and shortcomings have not been compared systematically. Results To evaluate and improve the existing methods for allergen prediction, we collected an up-to-date definitive dataset consisting of 989 known allergens and massive putative non-allergens. The three most widely used computational allergen prediction approaches, including sequence-, motif- and SVM-based (Support Vector Machine) methods, were systematically compared using the defined parameters, and we found that the SVM-based method outperformed the other two methods with higher accuracy and specificity. The sequence-based method with the criteria defined by FAO/WHO (FAO: Food and Agriculture Organization of the United Nations; WHO: World Health Organization) has a higher sensitivity of over 98%, but a low specificity. The advantage of the motif-based method is the ability to visualize the key motif within the allergen. Notably, the performances of the sequence-based method defined by FAO/WHO and the motif-eliciting strategy could be improved by the optimization of parameters. To facilitate allergen prediction, we integrated these three methods in a web-based application, proAP, which provides a global search of the known allergens and a powerful tool for allergen prediction. Flexible parameter setting and batch prediction were also implemented. The proAP can be accessed at http://gmobl.sjtu.edu.cn/proAP/main.html. Conclusions This study comprehensively evaluated sequence-, motif- and SVM-based computational prediction approaches for allergens and optimized their parameters to obtain better performance. These findings may provide helpful guidance for researchers in allergen prediction. Furthermore, we integrated these methods into a web application, proAP, greatly facilitating customizable allergen search and prediction. PMID:23514097
Evaluation and integration of existing methods for computational prediction of allergens.
Wang, Jing; Yu, Yabin; Zhao, Yunan; Zhang, Dabing; Li, Jing
2013-01-01
Allergy involves a series of complex reactions and factors that contribute to the development of the disease and triggering of the symptoms, including rhinitis, asthma, atopic eczema, skin sensitivity, even acute and fatal anaphylactic shock. Prediction and evaluation of the potential allergenicity is of importance for safety evaluation of foods and other environment factors. Although several computational approaches for assessing the potential allergenicity of proteins have been developed, their performance and relative merits and shortcomings have not been compared systematically. To evaluate and improve the existing methods for allergen prediction, we collected an up-to-date definitive dataset consisting of 989 known allergens and massive putative non-allergens. The three most widely used computational allergen prediction approaches, including sequence-, motif- and SVM-based (Support Vector Machine) methods, were systematically compared using the defined parameters, and we found that the SVM-based method outperformed the other two methods with higher accuracy and specificity. The sequence-based method with the criteria defined by FAO/WHO (FAO: Food and Agriculture Organization of the United Nations; WHO: World Health Organization) has a higher sensitivity of over 98%, but a low specificity. The advantage of the motif-based method is the ability to visualize the key motif within the allergen. Notably, the performances of the sequence-based method defined by FAO/WHO and the motif-eliciting strategy could be improved by the optimization of parameters. To facilitate allergen prediction, we integrated these three methods in a web-based application, proAP, which provides a global search of the known allergens and a powerful tool for allergen prediction. Flexible parameter setting and batch prediction were also implemented. The proAP can be accessed at http://gmobl.sjtu.edu.cn/proAP/main.html. This study comprehensively evaluated sequence-, motif- and SVM-based computational prediction approaches for allergens and optimized their parameters to obtain better performance. These findings may provide helpful guidance for researchers in allergen prediction. Furthermore, we integrated these methods into a web application, proAP, greatly facilitating customizable allergen search and prediction.
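The FAO/WHO sequence-based criteria referenced in both entries above are concrete enough to sketch: flag a query protein if any 80-residue window shares more than 35% identity with a known allergen, or if it contains an identical stretch of six contiguous residues. The toy screen below uses exact-position matching on equal-length windows, a simplification of the real alignment-based procedure:

```python
# Sketch: FAO/WHO-style two-rule allergenicity screen (simplified).
def fao_who_flag(query: str, allergen: str) -> bool:
    # Rule 1: exact 6-mer match against the known allergen
    kmers = {allergen[i:i + 6] for i in range(len(allergen) - 5)}
    if any(query[i:i + 6] in kmers for i in range(len(query) - 5)):
        return True
    # Rule 2: >35% identity over an 80-residue sliding window
    w = 80
    for i in range(max(1, len(query) - w + 1)):
        for j in range(max(1, len(allergen) - w + 1)):
            a, b = query[i:i + w], allergen[j:j + w]
            n = min(len(a), len(b))
            identity = sum(x == y for x, y in zip(a, b)) / n
            if identity > 0.35:
                return True
    return False

# Toy sequences: the shared leading 6-mer triggers rule 1
print(fao_who_flag("MKTLLLTLVVVTIVCLDLGYT" * 4, "MKTLLLT" + "A" * 100))
```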
Linear array ultrasonography to stage rectal neoplasias suitable for local treatment.
Ravizza, Davide; Tamayo, Darina; Fiori, Giancarla; Trovato, Cristina; De Roberto, Giuseppe; de Leone, Annalisa; Crosta, Cristiano
2011-08-01
Because of the many therapeutic options available, a reliable staging is crucial for rectal neoplasia management. Adenomas and cancers limited to the submucosa without lymph node involvement may be treated locally. The aim of this study is to evaluate the diagnostic accuracy of endorectal ultrasonography in the staging of neoplasias suitable for local treatment. We considered all patients who underwent endorectal ultrasonography between 2001 and 2010. The study population consisted of 92 patients with 92 neoplasias (68 adenocarcinomas and 24 adenomas). A 5 and 7.5 MHz linear array echoendoscope was used. The postoperative histopathologic result was compared with the preoperative staging defined by endorectal ultrasonography. Adenomas and cancers limited to the submucosa were considered together (pT0-1). The sensitivity, specificity, overall accuracy rate, positive predictive value, and negative predictive value of endorectal ultrasonography for pT0-1 were 86%, 95.6%, 91.3%, 94.9% and 88.7%, respectively. Those for nodal involvement were 45.4%, 95.5%, 83%, 76.9% and 84%, with 3 false positive results and 12 false negatives. For combined pT0-1 and pN0, endorectal ultrasonography showed an 87.5% sensitivity, 95.9% specificity, 92% overall accuracy rate, 94.9% positive predictive value and 90.2% negative predictive value. Endorectal linear array ultrasonography is a reliable tool to detect rectal neoplasias suitable for local treatment. Copyright © 2011 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Castedo, Ricardo; de la Vega-Panizo, Rogelio; Fernández-Hernández, Marta; Paredes, Carlos
2015-02-01
A key requirement for effective coastal zone management is good knowledge of historical rates of change and the ability to predict future shoreline evolution, especially for rapidly eroding areas. Historical shoreline recession analysis was used for the prediction of future cliff shoreline positions along a 9 km section between Bridlington and Hornsea, on the northern area of the Holderness Coast, UK. The analysis was based on historical maps and aerial photographs dating from 1852 to 2011, using the Digital Shoreline Analysis System (DSAS) 4.3, an extension of ESRI's ArcInfo 10.x. The prediction of future shorelines was performed for the next 40 years using a variety of techniques, ranging from extrapolation from historical data and geometric approaches like historical trend analysis, to a process-response numerical model that incorporates physically-based equations and geotechnical stability analysis. With climate change and sea-level rise implying that historical rates of change may not be a reliable guide for the future, enhanced visualization of the evolving coastline has the potential to improve awareness of these changing conditions. Following the IPCC (2013) report, two sea-level rise rates, 2 mm/yr and 6 mm/yr, have been used to estimate future shoreline conditions. This study illustrated that good predictive models, once their limitations are estimated or at least defined, are available for use by managers, planners, engineers, scientists and the public to make better decisions regarding coastal management, development, and erosion-control strategies.
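The extrapolation component of such a forecast can be sketched as a linear recession rate fitted to historical positions and projected 40 years ahead; the scaling of that rate under the two sea-level-rise scenarios below is a hypothetical illustration, not the paper's process-response model:

```python
# Sketch: linear-rate shoreline extrapolation with invented survey data.
import numpy as np

years = np.array([1852, 1910, 1952, 1989, 2011], dtype=float)
positions = np.array([0.0, -70.0, -125.0, -175.0, -210.0])  # metres (landward is -)

rate = np.polyfit(years, positions, 1)[0]    # ~ -1.3 m/yr historical rate
for label, factor in [("2 mm/yr SLR", 1.0), ("6 mm/yr SLR", 1.3)]:
    # The rate multiplier under the higher SLR scenario is purely illustrative.
    pos_2051 = positions[-1] + rate * factor * (2051 - 2011)
    print(f"{label}: predicted 2051 position = {pos_2051:.0f} m")
```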
Stuart, K; Adderley, N J; Marshall, T; Rayman, G; Sitch, A; Manley, S; Ghosh, S; Toulis, K A; Nirantharakumar, K
2017-10-01
To explore whether a quantitative approach to identifying hospitalized patients with diabetes at risk of hypoglycaemia would be feasible through incorporation of routine biochemical, haematological and prescription data. A retrospective cross-sectional analysis of all diabetic admissions (n=9584) from 1 January 2014 to 31 December 2014 was performed. Hypoglycaemia was defined as a blood glucose level of <4 mmol/l. The prediction model was constructed using multivariable logistic regression, populated by clinically important variables and routine laboratory data. Using a prespecified variable selection strategy, it was shown that the occurrence of inpatient hypoglycaemia could be predicted by a combined model taking into account background medication (type of insulin, use of sulfonylureas), ethnicity (black and Asian), age (≥75 years), type of admission (emergency) and laboratory measurements (estimated GFR, C-reactive protein, sodium and albumin). Receiver-operating curve analysis showed that the area under the curve was 0.733 (95% CI 0.719 to 0.747). The threshold chosen to maximize both sensitivity and specificity was 0.15. The area under the curve obtained from internal validation did not differ from the primary model [0.731 (95% CI 0.717 to 0.746)]. The inclusion of routine biochemical data, available at the time of admission, can add prognostic value to demographic and medication history. The predictive performance of the constructed model indicates potential clinical utility for the identification of patients at risk of hypoglycaemia during their inpatient stay. © 2017 Diabetes UK.
Astuti, Dewi; Sabir, Ataf; Fulton, Piers; Zatyka, Malgorzata; Williams, Denise; Hardy, Carol; Milan, Gabriella; Favaretto, Francesca; Yu‐Wai‐Man, Patrick; Rohayem, Julia; López de Heredia, Miguel; Hershey, Tamara; Tranebjaerg, Lisbeth; Chen, Jian‐Hua; Chaussenot, Annabel; Nunes, Virginia; Marshall, Bess; McAfferty, Susan; Tillmann, Vallo; Maffei, Pietro; Paquis‐Flucklinger, Veronique; Geberhiwot, Tarekign; Mlynarski, Wojciech; Parkinson, Kay; Picard, Virginie; Bueno, Gema Esteban; Dias, Renuka; Arnold, Amy; Richens, Caitlin; Paisey, Richard; Urano, Fumihiko; Semple, Robert; Sinnott, Richard
2017-01-01
Abstract We developed a variant database for diabetes syndrome genes, using the Leiden Open Variation Database platform, containing observed phenotypes matched to the genetic variations. We populated it with 628 published disease‐associated variants (December 2016) for: WFS1 (n = 309), CISD2 (n = 3), ALMS1 (n = 268), and SLC19A2 (n = 48) for Wolfram type 1, Wolfram type 2, Alström, and Thiamine‐responsive megaloblastic anemia syndromes, respectively; and included 23 previously unpublished novel germline variants in WFS1 and 17 variants in ALMS1. We then investigated genotype–phenotype relations for the WFS1 gene. The presence of biallelic loss‐of‐function variants predicted Wolfram syndrome defined by insulin‐dependent diabetes and optic atrophy, with a sensitivity of 79% (95% CI 75%–83%) and specificity of 92% (83%–97%). The presence of minor loss‐of‐function variants in WFS1 predicted isolated diabetes, isolated deafness, or isolated congenital cataracts without development of the full syndrome (sensitivity 100% [93%–100%]; specificity 78% [73%–82%]). The ability to provide a prognostic prediction based on genotype will lead to improvements in patient care and counseling. The development of the database as a repository for monogenic diabetes gene variants will allow prognostic predictions for other diabetes syndromes as next‐generation sequencing expands the repertoire of genotypes and phenotypes. The database is publicly available online at https://lovd.euro-wabb.org. PMID:28432734
Prediction of the Length of Upcoming Solar Cycles
NASA Astrophysics Data System (ADS)
Kakad, Bharati; Kakad, Amar; Ramesh, Durbha Sai
2017-12-01
The forecast of solar cycle (SC) characteristics is crucial, particularly for several space-based missions. In the present study, we propose a new model for predicting the length of the SC. The model uses the information of the width of an autocorrelation function that is derived from the daily sunspot data for each SC. We tested the model on Versions 1 and 2 of the daily international sunspot number data for SCs 10-24. We found that the autocorrelation width A_w^n of SC n during the second half of its ascending phase correlates well with the modified length that is defined as T_cy^{n+2} - T_a^n. Here T_cy^{n+2} and T_a^n are the length of SC n+2 and the ascent time of SC n, respectively. The estimated correlation coefficient between the model parameters is 0.93 (0.91) for the Version 1 (Version 2) sunspot series. The standard errors in the observed and predicted lengths of the SCs for Version 1 and Version 2 data are 0.38 and 0.44 years, respectively. The advantage of the proposed model is that predictions of the length of the upcoming two SCs (i.e., n+1 and n+2) are readily available at the time of the peak of SC n. The present model gives a forecast of 11.01, 10.52, and 11.91 years (11.01, 12.20, and 11.68 years) for the length of SCs 24, 25, and 26, respectively, for Version 1 (Version 2).
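The prediction relation can be sketched as a linear regression of the modified length on the autocorrelation width: once A_w^n and T_a^n are known at the peak of cycle n, the length of cycle n+2 follows. The numbers below are synthetic, not the SC 10-24 values:

```python
# Sketch: regress T_cy(n+2) - T_a(n) on A_w(n), then predict a new cycle length.
import numpy as np

rng = np.random.default_rng(5)
A_w = rng.uniform(2.0, 4.0, 13)                        # autocorrelation widths of SC n
T_a = rng.uniform(3.5, 5.0, 13)                        # ascent times of SC n (years)
modified = 1.8 * A_w + 1.0 + rng.normal(0, 0.3, 13)    # synthetic T_cy(n+2) - T_a(n)

a, b = np.polyfit(A_w, modified, 1)                    # fit the linear relation
corr = np.corrcoef(A_w, modified)[0, 1]                # cf. the 0.93 reported above

A_w_new, T_a_new = 3.1, 4.2                            # available at the peak of SC n
T_cy_pred = a * A_w_new + b + T_a_new                  # predicted length of SC n+2
print(f"r = {corr:.2f}, predicted T_cy(n+2) = {T_cy_pred:.2f} yr")
```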
Admission factors can predict the need for ICU monitoring in gallstone pancreatitis.
Arnell, T D; de Virgilio, C; Chang, L; Bongard, F; Stabile, B E
1996-10-01
The purpose was 1) to prospectively determine the prevalence of adverse events necessitating intensive care unit (ICU) monitoring in gallstone pancreatitis (GP) and 2) to identify admission prognostic indicators that predict the need for ICU monitoring. Prospective laboratory data, physiologic parameters, and APACHE II scores were gathered on 102 patients with GP over 14 months. Adverse events were defined as cardiac, respiratory, or renal failure, gastrointestinal bleeding, stroke, sepsis, and necrotizing pancreatitis. Patients were divided into Group 1 (no adverse events, n=95) and Group 2 (adverse events, n=7). There were no deaths and 7 (7%) adverse events, including necrotizing pancreatitis (3), cholangitis (2), and cardiac (2). APACHE II ≥ 5 (P < 0.005), blood urea nitrogen (BUN) ≥ 12 mmol/L (P < 0.005), white blood cell count (WBC) ≥ 14.5 x 10(9)/L (P < 0.001), heart rate ≥ 100 bpm (P < 0.001), and glucose ≥ 150 mg/dL (P < 0.005) were each independent predictors of adverse events. The sensitivity and specificity of these criteria for predicting severe complications requiring ICU care varied from 71 to 86 per cent and 78 to 87 per cent, respectively. The prevalence of adverse events necessitating ICU care in GP patients is low. Glucose, BUN, WBC, heart rate, and APACHE II scores are independent predictors of adverse events necessitating ICU care. Criteria predicting the need for ICU care are readily available on admission.
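Because every predictor is available on admission, the result suggests a simple screening rule. A sketch that flags a patient when any published cutoff is met; combining the criteria with a plain any() is our illustrative choice, not the authors' stated rule:

```python
# Sketch: admission screen using the univariate cutoffs from the abstract.
def flag_for_icu(apache_ii: int, bun_mmol_l: float, wbc_e9_l: float,
                 heart_rate_bpm: float, glucose_mg_dl: float) -> bool:
    return any([
        apache_ii >= 5,
        bun_mmol_l >= 12.0,
        wbc_e9_l >= 14.5,
        heart_rate_bpm >= 100,
        glucose_mg_dl >= 150,
    ])

print(flag_for_icu(apache_ii=3, bun_mmol_l=6.0, wbc_e9_l=15.2,
                   heart_rate_bpm=88, glucose_mg_dl=120))   # True (WBC criterion)
```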
Precise leveling, space geodesy and geodynamics
NASA Technical Reports Server (NTRS)
Reilinger, R.
1981-01-01
The implications of currently available leveling data on understanding the crustal dynamics of the continental United States are investigated. Neotectonic deformation, near surface movements, systematic errors in releveling measurements, and the implications of this information for earthquake prediction are described. Vertical crustal movements in the vicinity of the 1931 Valentine, Texas, earthquake which may represent coseismic deformation are investigated. The detection of vertical fault displacements by precise leveling in western Kentucky is reported. An empirical basis for defining releveling anomalies and its implications for crustal deformation in southern California is presented. Releveling measurements in the eastern United States and their meaning in the context of possible crustal deformation, including uplift of the Appalachian Mountains, eastward tilting of the Atlantic Coastal Plain, and apparent movements associated with a number of structural features along the east coast, are reported.
Realistic wave-optics simulation of X-ray phase-contrast imaging at a human scale
Sung, Yongjin; Segars, W. Paul; Pan, Adam; Ando, Masami; Sheppard, Colin J. R.; Gupta, Rajiv
2015-01-01
X-ray phase-contrast imaging (XPCI) can dramatically improve soft tissue contrast in X-ray medical imaging. Despite worldwide efforts to develop novel XPCI systems, a numerical framework to rigorously predict the performance of a clinical XPCI system at a human scale is not yet available. We have developed such a tool by combining a numerical anthropomorphic phantom defined with non-uniform rational B-splines (NURBS) and a wave optics-based simulator that can accurately capture the phase-contrast signal from a human-scaled numerical phantom. Using a synchrotron-based, high-performance XPCI system, we provide qualitative comparison between simulated and experimental images. Our tool can be used to simulate the performance of XPCI on various disease entities and compare proposed XPCI systems in an unbiased manner. PMID:26169570
An analytical study of electric vehicle handling dynamics
NASA Technical Reports Server (NTRS)
Greene, J. E.; Segal, D. J.
1979-01-01
Hypothetical electric vehicle configurations were studied by applying available analytical methods. Elementary linearized models were used in addition to a highly sophisticated vehicle dynamics computer simulation technique. Physical properties of specific EV's were defined for various battery and powertrain packaging approaches applied to a range of weight distribution and inertial properties which characterize a generic class of EV's. Computer simulations of structured maneuvers were performed for predicting handling qualities in the normal driving range and during various extreme conditions related to accident avoidance. Results indicate that an EV with forward weight bias will possess handling qualities superior to a comparable EV that is rear-heavy or equally balanced. The importance of properly matching tires, suspension systems, and brake system front/rear torque proportioning to a given EV configuration during the design stage is demonstrated.
Realistic wave-optics simulation of X-ray phase-contrast imaging at a human scale
NASA Astrophysics Data System (ADS)
Sung, Yongjin; Segars, W. Paul; Pan, Adam; Ando, Masami; Sheppard, Colin J. R.; Gupta, Rajiv
2015-07-01
X-ray phase-contrast imaging (XPCI) can dramatically improve soft tissue contrast in X-ray medical imaging. Despite worldwide efforts to develop novel XPCI systems, a numerical framework to rigorously predict the performance of a clinical XPCI system at a human scale is not yet available. We have developed such a tool by combining a numerical anthropomorphic phantom defined with non-uniform rational B-splines (NURBS) and a wave-optics-based simulator that can accurately capture the phase-contrast signal from a human-scaled numerical phantom. Using a synchrotron-based, high-performance XPCI system, we provide qualitative comparison between simulated and experimental images. Our tool can be used to simulate the performance of XPCI on various disease entities and compare proposed XPCI systems in an unbiased manner.
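The propagation step at the heart of such a wave-optics simulator can be sketched with the angular-spectrum method; a full XPCI simulation would add the NURBS phantom's complex transmission and a detector model. A minimal fragment with illustrative parameters:

```python
# Sketch: free-space propagation of a complex X-ray field (angular spectrum).
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, distance):
    """Propagate a square 2-D complex field by `distance` (units of dx)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    # kz = 2*pi*sqrt(1/lambda^2 - fx^2 - fy^2); clamp evanescent components
    kz = 2 * np.pi * np.sqrt(np.maximum(0.0, wavelength**-2 - FX**2 - FY**2))
    H = np.exp(1j * kz * distance)            # free-space transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

wavelength = 6.2e-11                          # ~20 keV X-rays, in metres
field = np.ones((256, 256), dtype=complex)
field[100:156, 100:156] *= np.exp(1j * 0.1)   # weak phase object
out = angular_spectrum_propagate(field, wavelength, dx=1e-6, distance=1.0)
print(np.abs(out).std())  # pure phase structure becomes intensity contrast
```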
TSAPA: identification of tissue-specific alternative polyadenylation sites in plants.
Ji, Guoli; Chen, Moliang; Ye, Wenbin; Zhu, Sheng; Ye, Congting; Su, Yaru; Peng, Haonan; Wu, Xiaohui
2018-06-15
Alternative polyadenylation (APA) is now emerging as a widespread mechanism modulated tissue-specifically, which highlights the need to define tissue-specific poly(A) sites for profiling APA dynamics across tissues. We have developed an R package called TSAPA based on the machine learning model for identifying tissue-specific poly(A) sites in plants. A feature space including more than 200 features was assembled to specifically characterize poly(A) sites in plants. The classification model in TSAPA can be customized by selecting desirable features or classifiers. TSAPA is also capable of predicting tissue-specific poly(A) sites in unannotated intergenic regions. TSAPA will be a valuable addition to the community for studying dynamics of APA in plants. https://github.com/BMILAB/TSAPA. Supplementary data are available at Bioinformatics online.
The high altitude reconnaissance platform (HARP) and its capabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rusk, D.; Rose, R.L.; Gibeau, E.
1996-10-01
The High Altitude Reconnaissance Platform (HARP), a Learjet 36A, is a multi-purpose, long-range, high-altitude aircraft specially modified to serve as a meteorological observation platform. Its instrument suite includes: particle probes, Ka-band radar, two-color lidar, infrared spectroradiometer, thermometer, hygrometer, liquid water probe, and a gust probe. Aeromet scientists have developed software and hardware systems that combine data using sensor fusion concepts, providing detailed environmental information. The HARP answers the need for defining and predicting meteorological conditions throughout large atmospheric volumes, particularly in areas where conventional surface and upper-air observations are not available. It also fills the need for gathering and predicting meteorological conditions along an optical sensor's line of sight or a missile's reentry path.
Roh, Kyung-Ho; Nerem, Robert M; Roy, Krishnendu
2016-06-07
Stem cells and other functionally defined therapeutic cells (e.g., T cells) are promising to bring hope of a permanent cure for diseases and disorders that currently cannot be cured by conventional drugs or biological molecules. This paradigm shift in modern medicine of using cells as novel therapeutics can be realized only if suitable manufacturing technologies for large-scale, cost-effective, reproducible production of high-quality cells can be developed. Here we review the state of the art in therapeutic cell manufacturing, including cell purification and isolation, activation and differentiation, genetic modification, expansion, packaging, and preservation. We identify current challenges and discuss opportunities to overcome them such that cell therapies become highly effective, safe, and predictively reproducible while at the same time becoming affordable and widely available.
Microcirculation and the physiome projects.
Bassingthwaighte, James B
2008-11-01
The Physiome projects comprise a loosely knit worldwide effort to define the Physiome through databases and theoretical models, with the goal of better understanding the integrative functions of cells, organs, and organisms. The projects involve developing and archiving models, providing centralized databases, and linking experimental information and models from many laboratories into self-consistent frameworks. Increasingly accurate and complete models that embody quantitative biological hypotheses, adhere to high standards, and are publicly available and reproducible, together with refined and curated data, will enable biological scientists to advance integrative, analytical, and predictive approaches to the study of medicine and physiology. This review discusses the rationale and history of the Physiome projects, the role of theoretical models in the development of the Physiome, and the current status of efforts in this area addressing the microcirculation.
Weiden, Michael D.; Naveed, Bushra; Kwon, Sophia; Cho, Soo Jung; Comfort, Ashley L.; Prezant, David J.; Rom, William N.; Nolan, Anna
2013-01-01
Pulmonary vascular loss is an early feature of chronic obstructive pulmonary disease. Biomarkers of inflammation and of metabolic syndrome predict loss of lung function in World Trade Center Lung Injury (WTC-LI). We investigated whether other cardiovascular disease (CVD) biomarkers also predicted WTC-LI. This nested case-cohort study used 801 never-smoker, WTC-exposed firefighters with normal pre-9/11 lung function presenting for subspecialty pulmonary evaluation (SPE) before March, 2008. A representative sub-cohort of 124/801 with serum drawn within six months of 9/11 defined the CVD biomarker distribution. Post-9/11 FEV1 at the subspecialty exam defined cases: susceptible WTC-LI cases with FEV1≤77% predicted (66/801) and resistant WTC-LI cases with FEV1≥107% (68/801). All models were adjusted for WTC exposure intensity, BMI at SPE, age at 9/11, and pre-9/11 FEV1. Susceptible WTC-LI cases had higher levels of Apo-AII, CRP, and MIP-4, with significant RRs of 3.85, 3.93, and 0.26 respectively, with an area under the curve (AUC) of 0.858. Resistant WTC-LI cases had significantly higher sVCAM and lower MPO, with RRs of 2.24 and 2.89 respectively; AUC 0.830. Biomarkers of CVD in serum six months post-9/11 predicted either susceptibility or resistance to WTC-LI. These biomarkers may define pathways producing or protecting subjects from pulmonary vascular disease and associated loss of lung function after an irritant exposure. PMID:22903969
Predicting Parental Home and School Involvement in High School African American Adolescents
ERIC Educational Resources Information Center
Hayes, DeMarquis
2011-01-01
Predictors of parental home and school involvement for high school adolescents were examined within two groups of urban African American parents from various socioeconomic levels. Home involvement was defined as parent-adolescent communication about school and learning, while school involvement was defined in terms of parent attendance and…
Individual Differences in Anatomy Predict Reading and Oral Language Impairments in Children
ERIC Educational Resources Information Center
Leonard, Christiana; Eckert, Mark; Given, Barbara; Virginia, Berninger; Eden, Guinevere
2006-01-01
Developmental dyslexia (DD) and specific language impairment (SLI) are disorders of language that differ in diagnostic criteria and outcome. DD is defined by isolated reading deficits. SLI is defined by poor receptive and expressive oral language skills. Reading deficits, although prevalent, are not necessary for the diagnosis of SLI. An enduring…
ERIC Educational Resources Information Center
Doabler, Christian T.; Nelson-Walker, Nancy; Kosty, Derek; Baker, Scott K.; Smolkowski, Keith; Fien, Hank
2013-01-01
In this study, the authors conceptualize teaching episodes as an integrated set of observable student-teacher interactions. Instructional interactions that take place between teachers and students around critical academic content are a defining characteristic of classroom instruction and a component carefully defined in many education…
David L. Loftis
1991-01-01
Classical concepts of post-disturbance succession through well-defined seral stages to a well-defined climax stage(s) are not a useful conceptual framework for predicting species composition of regeneration resulting from the application of regeneration treatments in complex southern hardwood forests. Hardwood regeneration can be better understood, and more useful...
Mythologizing Change: Examining Rhetorical Myth as a Strategic Change Management Discourse
ERIC Educational Resources Information Center
Rawlins, Jacob D.
2014-01-01
This article explores how rhetorical myth can be used as a tool for persuading employees to accept change and to maintain consensus during the process. It defines rhetorical myth using three concepts: "chronographia" (a rhetorical interpretation of history), epideictic prediction (defining a present action by assigning praise and blame…
Vertical integration and diversification of acute care hospitals: conceptual definitions.
Clement, J P
1988-01-01
The terms vertical integration and diversification, although used quite frequently, are ill-defined for use in the health care field. In this article, the concepts are defined--specifically for nonuniversity acute care hospitals. The resulting definitions are more useful than previous ones for predicting the effects of vertical integration and diversification.
Predicting features of breast cancer with gene expression patterns.
Lu, Xuesong; Lu, Xin; Wang, Zhigang C; Iglehart, J Dirk; Zhang, Xuegong; Richardson, Andrea L
2008-03-01
Data from gene expression arrays hold an enormous amount of biological information. We sought to determine if global gene expression in primary breast cancers contained information about biologic, histologic, and anatomic features of the disease in individual patients. Microarray data from the tumors of 129 patients were analyzed for the ability to predict biomarkers [estrogen receptor (ER) and HER2], histologic features [grade and lymphatic-vascular invasion (LVI)], and stage parameters (tumor size and lymph node metastasis). Multiple statistical predictors were used and the prediction accuracy was determined by cross-validation error rate; multidimensional scaling (MDS) allowed visualization of the predicted states under study. Models built from gene expression data accurately predict ER and HER2 status, and divide tumor grade into high-grade and low-grade clusters; intermediate-grade tumors are not a unique group. In contrast, gene expression data is inaccurate at predicting tumor size, lymph node status or LVI. The best model for prediction of nodal status included tumor size, LVI status and pathologically defined tumor subtype (based on combinations of ER, HER2, and grade); the addition of microarray-based prediction to this model failed to improve the prediction accuracy. Global gene expression supports a binary division of ER, HER2, and grade, clearly separating tumors into two categories; intermediate values for these bio-indicators do not define intermediate tumor subsets. Results are consistent with a model of regional metastasis that depends on inherent biologic differences in metastatic propensity between breast cancer subtypes, upon which time and chance then operate.
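A minimal sketch of the kind of cross-validated prediction described here, using one generic penalized logistic classifier on a placeholder expression matrix (the study evaluated multiple statistical predictors; all data and names below are hypothetical):

```python
# Cross-validated prediction of a binary tumor marker (e.g., ER status)
# from gene expression data. Placeholder random data stands in for the
# 129-tumor microarray matrix used in the study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(129, 500))     # 129 tumors x 500 genes (toy data)
y = rng.integers(0, 2, size=129)    # binary marker labels (toy data)

clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
# Prediction accuracy assessed by cross-validation error rate,
# mirroring the evaluation scheme described in the abstract.
acc = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
print(f"cross-validated error rate: {1 - acc.mean():.3f}")
```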
Bruce, David G; Davis, Wendy A; Dragovic, Milan; Davis, Timothy M E; Starkstein, Sergio E
2016-10-01
The aims were to determine whether anxious depression, defined by latent class analysis (LCA), predicts cardiovascular outcomes in type 2 diabetes and to compare the predictive power of anxious depression with Diagnostic & Statistical Manual Versions IV and 5 (DSM-IV/5) categories of depression and generalized anxiety disorder (GAD). This was a prospective observational study of 1,337 participants with type 2 diabetes. Baseline assessment used the 9-item Patient Health Questionnaire and the GAD Scale; LCA defined groups with minor or major anxious depression based on anxiety and depression symptoms. Cox modeling was used to compare the independent impact of (1) LCA anxious depression, (2) DSM-IV/5 depression, and (3) GAD on incident cardiovascular events and deaths after 4 years. LCA minor and major anxious depression were present in 21.9 and 7.8% of participants, respectively, DSM-IV/5 minor and major depression in 6.2 and 6.1%, respectively, and GAD in 4.8%. There were 110 deaths, 31 cardiovascular deaths, and 199 participants had incident cardiovascular events. In adjusted models, minor anxious depression (hazard ratio (95% confidence interval): 1.70 (1.15-2.50)) and major anxious depression (1.90 (1.11-3.25)) predicted incident cardiovascular events, and major anxious depression also predicted cardiovascular mortality (4.32 (1.35-13.86)). By comparison, incident cardiovascular events were predicted by DSM-IV/5 major depression (2.10 (1.22-3.62)) only, and cardiovascular mortality was predicted by both DSM-IV/5 major depression (3.56 (1.03-12.35)) and GAD (5.92 (1.84-19.08)). LCA-defined anxious depression is more common than DSM-IV/5 categories and is a strong predictor of cardiovascular outcomes in type 2 diabetes. These data suggest that this diagnostic scheme has predictive validity and clinical relevance. © 2016 Wiley Periodicals, Inc.
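A hedged sketch of the Cox modeling step described above, using the lifelines library on synthetic data; column names and values are hypothetical stand-ins, not the study's dataset or covariate set.

```python
# Cox proportional-hazards model: hazard of incident cardiovascular
# events by LCA-defined anxious-depression group (toy data).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "minor_anx_dep": rng.integers(0, 2, n),   # LCA minor anxious depression
    "major_anx_dep": rng.integers(0, 2, n),   # LCA major anxious depression
    "age": rng.normal(65, 8, n),
})
df["time_to_event"] = rng.exponential(4.0, n)  # years of follow-up (toy)
df["cv_event"] = rng.integers(0, 2, n)         # 1 = incident CV event

cph = CoxPHFitter()
cph.fit(df, duration_col="time_to_event", event_col="cv_event")
cph.print_summary()  # hazard ratios with 95% confidence intervals
```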
Perceived freedom of choice is associated with neural encoding of option availability.
Rens, Natalie; Bode, Stefan; Cunnington, Ross
2018-05-03
Freedom of choice has been defined as the opportunity to choose alternative plans of action. In this fMRI study, we investigated how the perceived freedom of choice and the underlying neural correlates are influenced by the availability of options. Participants made an initial free choice between left or right doors before beginning a virtual walk along a corridor. At the mid-point of the corridor, lock cues appeared to reveal whether one or both doors remained available, requiring participants either to select a particular door or allowing them to freely choose to stay or switch their choice. We found that participants rated trials as free when they were able to carry out their initial choice, but even more so when both doors remained available. Multi-voxel pattern analysis showed that upcoming choices could initially be decoded from visual cortices before the appearance of the lock cues, and additionally from the motor cortex after the lock cues had confirmed which doors were open. When participants were able to maintain the same choice that they originally selected, the availability of alternative options was represented in fine-grained patterns of activity in the dorsolateral prefrontal cortex. Further, decoding accuracy in this region correlated with the subjective level of freedom that participants reported. These results suggest that there is neural encoding of the availability of alternative options in the dorsolateral prefrontal cortex, and the degree of this encoding predicts an individual's perceived freedom of choice. Copyright © 2018 Elsevier Inc. All rights reserved.
Ellingson, Benjamin M.; Lai, Albert; Nguyen, Huytram N.; Nghiemphu, Phioanh L.; Pope, Whitney B.; Cloughesy, Timothy F.
2015-01-01
Purpose Evaluation of nonenhancing tumor (NET) burden is an important, yet challenging part of brain tumor response assessment. The current study focuses on using dual echo turbo spin echo MRI as a means of quickly estimating tissue T2, which can be used to objectively define NET burden. Experimental Design A series of experiments were performed to establish the use of T2 maps for defining NET burden. First, variation in T2 was determined using ACR water phantoms in 16 scanners evaluated over 3 years. Next, the sensitivity and specificity of T2 maps for delineating NET from other tissues was examined. Then, T2-defined NET was used to predict survival in separate subsets of glioblastoma patients treated with radiation therapy, concurrent radiation and chemotherapy, or bevacizumab at recurrence. Results Variability in T2 in the ACR phantom was 3-5%. In training data, ROC analysis suggested that 125 ms < T2 < 250 ms could delineate NET with a sensitivity >90% and specificity >65%. Using this criterion, NET burden after completion of radiation therapy alone, or concurrent radiation therapy and chemotherapy, was shown to be predictive of survival (Cox, P<0.05), and the change in NET volume before and after bevacizumab therapy in recurrent glioblastoma was also predictive of survival (P<0.05). Conclusions T2 maps using dual echo data are feasible, stable, and can be used to objectively define NET burden for use in brain tumor characterization, prognosis, and response assessment. The use of effective T2 maps for defining NET burden should be validated in a randomized clinical trial. PMID:25901082
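To make the T2-window rule concrete, here is a minimal sketch of how NET burden could be computed from a voxel-wise T2 map; the array values and voxel dimensions are placeholders, not the study's acquisition parameters.

```python
# Label voxels with 125 ms < T2 < 250 ms as nonenhancing tumor (NET)
# and report NET burden as the labeled volume (toy T2 map).
import numpy as np

t2_map = np.random.default_rng(2).uniform(20, 400, size=(64, 64, 24))  # ms
voxel_volume_ml = 0.001 * 1.0 * 1.0 * 3.0  # 1 x 1 x 3 mm voxel -> mL (assumed)

net_mask = (t2_map > 125) & (t2_map < 250)
net_volume_ml = net_mask.sum() * voxel_volume_ml
print(f"NET burden: {net_volume_ml:.1f} mL")
```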
Rolland, Yves; Dupuy, Charlotte; Abellan Van Kan, Gabor; Cesari, Matteo; Vellas, Bruno; Faruch, Marie; Dray, Cedric; de Souto Barreto, Philipe
2017-10-01
Screening for sarcopenia in daily practice can be challenging. Our objective was to explore whether the SARC-F questionnaire is a valid screening tool for sarcopenia (defined by the Foundation for the National Institutes of Health [FNIH] criteria). Moreover, we evaluated the physical performance of older women according to the SARC-F questionnaire. Cross-sectional study. Data from the Toulouse and Lyon EPIDémiologie de l'OStéoporose study (EPIDOS) on 3025 women living in the community (mean age: 80.5 ± 3.9 years), without a previous history of hip fracture, were assessed. The SARC-F self-report questionnaire score ranges from 0 to 10: a score ≥4 defines sarcopenia. The FNIH criteria use handgrip strength (GS) and appendicular lean mass (ALM; assessed by DXA) divided by body mass index (BMI) to define sarcopenia. Outcome measures were the following performance-based tests: knee-extension strength, 6-m gait speed, and a repeated chair-stand test. The associations of sarcopenia with performance-based tests were examined using bootstrap multiple linear-regression models; adjusted R² determined the percentage variation for each outcome explained by the model. Prevalence of sarcopenia was 16.7% (n = 504) according to the SARC-F questionnaire and 1.8% (n = 49) using the FNIH criteria. Sensitivity and specificity of the SARC-F to diagnose sarcopenia (defined by FNIH criteria) were 34% and 85%, respectively. Sarcopenic women defined by SARC-F had significantly lower physical performance than nonsarcopenic women. The SARC-F improved the ability to predict poor physical performance. The validity of the SARC-F questionnaire to screen for sarcopenia, when compared with the FNIH criteria, was limited. However, sarcopenia defined by the SARC-F questionnaire substantially improved the predictive value of clinical characteristics of patients to predict poor physical performance. Copyright © 2017 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.
Nucleotide exchange and excision technology DNA shuffling and directed evolution.
Speck, Janina; Stebel, Sabine C; Arndt, Katja M; Müller, Kristian M
2011-01-01
Remarkable success in optimizing complex properties within DNA and proteins has been achieved by directed evolution. In contrast to various random mutagenesis methods and high-throughput selection methods, the number of available DNA shuffling procedures is limited, and protocols are often difficult to adjust. The strength of the nucleotide exchange and excision technology (NExT) DNA shuffling described here is the robust, efficient, and easily controllable DNA fragmentation step based on random incorporation of the so-called 'exchange nucleotides' by PCR. The exchange nucleotides are removed enzymatically, followed by chemical cleavage of the DNA backbone. The oligonucleotide pool is reassembled into full-length genes by internal primer extension, and the recombined gene library is amplified by standard PCR. The technique has been demonstrated by shuffling a defined gene library of chloramphenicol acetyltransferase variants using uridine as the fragmentation-defining exchange nucleotide. Substituting 33% of the dTTP with dUTP in the incorporation PCR resulted in shuffled clones with an average parental fragment size of 86 bases and a mutation rate of only 0.1%. Additionally, a computer program (NExTProg) has been developed that predicts the fragment size distribution depending on the relative amount of the exchange nucleotide.
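A toy model of the fragmentation step, loosely in the spirit of what NExTProg computes (this is a hypothetical re-implementation, not the published program): each T position in the template incorporates a cleavable U with some effective probability, so fragment lengths follow the spacing between cleaved positions. The incorporation bias against dUTP, which the paper accounts for, is folded here into a single assumed per-T cleavage probability.

```python
# Simulate NExT-style fragment sizes: cleave at a random subset of T
# positions and measure the resulting fragment-length distribution.
import numpy as np

rng = np.random.default_rng(3)
gene = rng.choice(list("ACGT"), size=660)   # toy CAT-sized gene sequence
p_cleave = 0.08                             # effective per-T cleavage prob (assumed)

cut_sites = np.flatnonzero((gene == "T") & (rng.random(gene.size) < p_cleave))
bounds = np.concatenate(([0], cut_sites, [gene.size]))
fragment_lengths = np.diff(bounds)
print(f"mean fragment length: {fragment_lengths.mean():.0f} bases")
```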
Coupling between Catalytic Loop Motions and Enzyme Global Dynamics
Kurkcuoglu, Zeynep; Bakan, Ahmet; Kocaman, Duygu; Bahar, Ivet; Doruker, Pemra
2012-01-01
Catalytic loop motions facilitate substrate recognition and binding in many enzymes. While these motions appear to be highly flexible, their functional significance suggests that structure-encoded preferences may play a role in selecting particular mechanisms of motions. We performed an extensive study on a set of enzymes to assess whether the collective/global dynamics, as predicted by elastic network models (ENMs), facilitates or even defines the local motions undergone by functional loops. Our dataset includes a total of 117 crystal structures for ten enzymes of different sizes and oligomerization states. Each enzyme contains a specific functional/catalytic loop (10–21 residues long) that closes over the active site during catalysis. Principal component analysis (PCA) of the available crystal structures (including apo and ligand-bound forms) for each enzyme revealed the dominant conformational changes taking place in these loops upon substrate binding. These experimentally observed loop reconfigurations are shown to be predominantly driven by energetically favored modes of motion intrinsically accessible to the enzyme in the absence of its substrate. The analysis suggests that robust global modes cooperatively defined by the overall enzyme architecture also entail local components that assist in suitable opening/closure of the catalytic loop over the active site. PMID:23028297
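A hedged sketch of the elastic-network step described above, using the ProDy library's anisotropic network model; the PDB ID, loop residue range, and mobility measure are illustrative assumptions, not the study's dataset or protocol.

```python
# Compute low-frequency global ENM modes for an enzyme's C-alpha network
# and inspect how mobile a catalytic loop is in the slowest mode.
import numpy as np
from prody import parsePDB, ANM

calphas = parsePDB("1ake", subset="ca")   # placeholder structure
anm = ANM("enzyme")
anm.buildHessian(calphas)                  # elastic network from CA contacts
anm.calcModes(n_modes=10)                  # ten slowest (global) modes

mode1 = anm[0].getEigvec().reshape(-1, 3)  # per-residue displacement vectors
loop = slice(118, 160)                     # hypothetical catalytic loop indices
rel_mobility = (np.linalg.norm(mode1[loop], axis=1).mean()
                / np.linalg.norm(mode1, axis=1).mean())
print(f"loop mobility relative to whole enzyme: {rel_mobility:.2f}")
```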
Child center closures: Does nonprofit status provide a comparative advantage?
Lam, Marcus; Klein, Sacha; Freisthler, Bridget; Weiss, Robert E
2013-03-01
Reliable access to dependable, high quality childcare services is a vital concern for large numbers of American families. The childcare industry consists of private nonprofit, private for-profit, and governmental providers that differ along many dimensions, including quality, clientele served, and organizational stability. Nonprofit providers are theorized to provide higher quality services given comparative tax advantages, higher levels of consumer trust, and management by mission driven entrepreneurs. This study examines the influence of ownership structure, defined as nonprofit, for-profit sole proprietors, for-profit companies, and governmental centers, on organizational instability, defined as childcare center closures. Using a cross sectional data set of 15724 childcare licenses in California for 2007, we model the predicted closures of childcare centers as a function of ownership structure as well as center age and capacity. Findings indicate that for small centers (capacity of 30 or less) nonprofits are more likely to close, but for larger centers (capacity 30+) nonprofits are less likely to close. This suggests that the comparative advantages available for nonprofit organizations may be better utilized by larger centers than by small centers. We consider the implications of our findings for parents, practitioners, and social policy.
Dynamics and Statistical Mechanics of Rotating and non-Rotating Vortical Flows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lim, Chjan
Three projects were analyzed with the overall aim of developing a computational/analytical model for estimating values of the energy, angular momentum, enstrophy and total variation of fluid height at phase transitions between disordered and self-organized flow states in planetary atmospheres. It is believed that these transitions in equilibrium statistical mechanics models play a role in the construction of large-scale, stable structures including super-rotation in the Venusian atmosphere and the formation of the Great Red Spot on Jupiter. Exact solutions of the spherical energy-enstrophy models for rotating planetary atmospheres by Kac's method of steepest descent predicted phase transitions to super-rotating solid-body flows at high energy to enstrophy ratio for all planetary spins, and to sub-rotating modes if the planetary spin is large enough. These canonical statistical ensembles are well-defined for the long-range energy interactions that arise from 2D fluid flows on compact oriented manifolds such as the surface of the sphere and torus. This is because in Fourier space, available through Hodge theory, the energy terms are exactly diagonalizable and hence have zero range, leading to well-defined heat baths.
Long, Qi; Xu, Jianpeng; Osunkoya, Adeboye O; Sannigrahi, Soma; Johnson, Brent A; Zhou, Wei; Gillespie, Theresa; Park, Jong Y; Nam, Robert K; Sugar, Linda; Stanimirovic, Aleksandra; Seth, Arun K; Petros, John A; Moreno, Carlos S
2014-06-15
Prostate cancer remains the second leading cause of cancer death in American men and there is an unmet need for biomarkers to identify patients with aggressive disease. In an effort to identify biomarkers of recurrence, we performed global RNA sequencing on 106 formalin-fixed, paraffin-embedded prostatectomy samples from 100 patients at three independent sites, defining a 24-gene signature panel. The 24 genes in this panel function in cell-cycle progression, angiogenesis, hypoxia, apoptosis, PI3K signaling, steroid metabolism, translation, chromatin modification, and transcription. Sixteen genes have been associated with cancer, with five specifically associated with prostate cancer (BTG2, IGFBP3, SIRT1, MXI1, and FDPS). Validation was performed on an independent publicly available dataset of 140 patients, where the new signature panel outperformed markers published previously in terms of predicting biochemical recurrence. Our work also identified differences in gene expression between Gleason pattern 4 + 3 and 3 + 4 tumors, including several genes involved in the epithelial-to-mesenchymal transition and developmental pathways. Overall, this study defines a novel biomarker panel that has the potential to improve the clinical management of prostate cancer. ©2014 American Association for Cancer Research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nobel, P.S.
Soil conditions were evaluated over the rooting depths for Agave deserti and Ferocactus acanthodes from the northwestern Sonoran Desert. These succulents have mean root depths of only 10 cm when adults and even shallower distributions when seedlings, which often occur in association with the nurse plant Hilaria rigida, which also has shallow roots. Maximum soil temperatures in the 2 cm beneath bare ground were predicted to exceed 65 C, which is lethal to the roots of A. deserti and F. acanthodes, whereas H. rigida reduced the maximum surface temperatures by over 10 C, providing a microhabitat suitable for seedling establishment. Water Availability was defined as the soil-to-plant drop in water potential, for periods when the plants could take up water, integrated over time. Below 4 cm under bare ground, simulated Water Availability increased slightly with depth (to 35 cm) for a wet year, was fairly constant for an average year, and decreased for a dry year, indicating that the shallow rooting habit is more advantageous in drier years. Water uptake by H. rigida substantially reduced Water Availability for seedlings associated with this nurse plant. On the other hand, a 66-90% higher soil nitrogen level occurred under H. rigida, possibly representing its harvesting of this macronutrient from a wide ground area. Phosphorus was slightly less abundant in the soil under H. rigida compared with under bare ground, the potassium level was substantially higher, and the sodium level was substantially lower. All four elements varied greatly with depth, N and K decreasing and P and Na increasing. Based on the known growth responses of A. deserti and F. acanthodes to these four elements, growth was predicted to be higher for plants in soil from the shallower layers, most of the differences being due to nitrogen.
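A minimal sketch of the Water Availability definition given above: the soil-to-plant water potential drop, counted only when uptake is possible (soil wetter than plant), summed over time. The daily potential values are hypothetical.

```python
# Water Availability = integral over time of max(0, psi_soil - psi_plant).
import numpy as np

psi_soil  = np.array([-0.3, -0.5, -1.2, -2.5, -0.4, -0.6])  # MPa, daily (toy)
psi_plant = np.array([-0.8, -0.8, -0.9, -0.9, -0.8, -0.8])  # MPa, daily (toy)

drop = np.clip(psi_soil - psi_plant, 0.0, None)  # uptake only when soil > plant
water_availability = drop.sum()                  # MPa*day, daily time step
print(f"Water Availability: {water_availability:.2f} MPa*day")
```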
The adverse outcome pathway (AOP) framework can be used to support the use of mechanistic toxicology data as a basis for risk assessment. For certain risk contexts this includes defining quantitative linkages between the molecular initiating event (MIE) and subsequent key events...
Conditions for Effective Application of Analysis of Symmetrically-Predicted Endogenous Subgroups
ERIC Educational Resources Information Center
Peck, Laura R.
2015-01-01
Several analytic strategies exist for opening up the "black box" to reveal more about what drives policy and program impacts. This article focuses on one of these strategies: the Analysis of Symmetrically-Predicted Endogenous Subgroups (ASPES). ASPES uses exogenous baseline data to identify endogenously-defined subgroups, keeping the…
OVERBURDEN MINERALOGY AS RELATED TO GROUND-WATER CHEMICAL CHANGES IN COAL STRIP MINING
A research program was initiated to define and develop an inclusive, effective, and economical method for predicting potential ground-water quality changes resulting from the strip mining of coal in the Western United States. To utilize the predictive method, it is necessary to s...
Predictive model of outcome of targeted nodal assessment in colorectal cancer.
Nissan, Aviram; Protic, Mladjan; Bilchik, Anton; Eberhardt, John; Peoples, George E; Stojadinovic, Alexander
2010-02-01
Improvement in staging accuracy is the principal aim of targeted nodal assessment in colorectal carcinoma. Technical factors independently predictive of false negative (FN) sentinel lymph node (SLN) mapping should be identified to facilitate operative decision making. To define independent predictors of FN SLN mapping and to develop a predictive model that could support surgical decisions. Data was analyzed from 2 completed prospective clinical trials involving 278 patients with colorectal carcinoma undergoing SLN mapping. Clinical outcome of interest was FN SLN(s), defined as one(s) with no apparent tumor cells in the presence of non-SLN metastases. To assess the independent predictive effect of a covariate for a nominal response (FN SLN), a logistic regression model was constructed and parameters estimated using maximum likelihood. A probabilistic Bayesian model was also trained and cross validated using 10-fold train-and-test sets to predict FN SLN mapping. Area under the curve (AUC) from receiver operating characteristics curves of these predictions was calculated to determine the predictive value of the model. Number of SLNs (<3; P = 0.03) and tumor-replaced nodes (P < 0.01) independently predicted FN SLN. Cross validation of the model created with Bayesian Network Analysis effectively predicted FN SLN (area under the curve = 0.84-0.86). The positive and negative predictive values of the model are 83% and 97%, respectively. This study supports a minimum threshold of 3 nodes for targeted nodal assessment in colorectal cancer, and establishes sufficient basis to conclude that SLN mapping and biopsy cannot be justified in the presence of clinically apparent tumor-replaced nodes.
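A minimal sketch of the cross-validated prediction logic described above: 10-fold cross-validated prediction of false-negative SLN mapping from the two identified predictors, scored by AUC. Data values are synthetic placeholders, and a plain logistic model stands in for the study's Bayesian network.

```python
# 10-fold cross-validated AUC for predicting FN SLN mapping (toy data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 278
X = np.column_stack([
    rng.integers(1, 6, n),    # number of SLNs sampled
    rng.integers(0, 2, n),    # tumor-replaced node present (0/1)
])
y = rng.integers(0, 2, n)     # FN SLN outcome (toy labels)

probs = cross_val_predict(LogisticRegression(), X, y, cv=10,
                          method="predict_proba")[:, 1]
print(f"cross-validated AUC: {roc_auc_score(y, probs):.2f}")
```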
Habous, Mohamad; Tal, Raanan; Tealab, Alaa; Soliman, Tarek; Nassar, Mohammed; Mekawi, Zenhom; Mahmoud, Saad; Abdelwahab, Osama; Elkhouly, Mohamed; Kamr, Hatem; Remeah, Abdallah; Binsaleh, Saleh; Ralph, David; Mulhall, John
2018-02-01
To re-evaluate the role of diabetes mellitus (DM) as a risk factor for penile implant infection by exploring the association between glycated haemoglobin (HbA1c) levels and penile implant infection rates and to define a threshold value that predicts implant infection. We conducted a multicentre prospective study including all patients undergoing penile implant surgery between 2009 and 2015. Preoperative, perioperative and postoperative management were identical for the entire cohort. Univariate analysis was performed to define predictors of implant infection. The HbA1c levels were analysed as continuous variables and sequential analysis was conducted using 0.5% increments to define a threshold level predicting implant infection. Multivariable analysis was performed with the following factors entered in the model: DM, HbA1C level, patient age, implant type, number of vascular risk factors (VRFs), presence of Peyronie's disease (PD), body mass index (BMI), and surgeon volume. A receiver operating characteristic (ROC) curve was generated to define the optimal HbA1C threshold for infection prediction. In all, 902 implant procedures were performed over the study period. The mean patient age was 56.6 years. The mean HbA1c level was 8.0%, with 81% of men having a HbA1c level of >6%. In all, 685 (76%) implants were malleable and 217 (24%) were inflatable devices; 302 (33.5%) patients also had a diagnosis of PD. The overall infection rate was 8.9% (80/902). Patients who had implant infection had significantly higher mean HbA1c levels, 9.5% vs 7.8% (P < 0.001). Grouping the cases by HbA1c level, we found infection rates were: 1.3% with HbA1c level of <6.5%, 1.5% for 6.5-7.5%, 6.5% for 7.6-8.5%, 14.7% for 8.6-9.5%, 22.4% for >9.5% (P < 0.001). Patient age, implant type, and number of VRFs were not predictive. Predictors defined on multivariable analysis were: PD, high BMI, and high HbA1c level, whilst a high-volume surgeon had a protective effect and was associated with a reduced infection risk. Using ROC analysis, we determined that a HbA1c threshold level of 8.5% predicted infection with a sensitivity of 80% and a specificity of 65%. Uncontrolled DM is associated with increased risk of infection after penile implant surgery. The risk is directly related to the HbA1c level. A threshold HbA1c level of 8.5% is suggested for clinical use to identify patients at increased infection risk. © 2017 The Authors BJU International © 2017 BJU International Published by John Wiley & Sons Ltd.
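A sketch of the threshold-selection step described above: pick the HbA1c cut-point on the ROC curve that best trades sensitivity against specificity. The data below are synthetic; the study's 8.5% threshold came from its own cohort.

```python
# ROC-based threshold selection for HbA1c as a predictor of implant
# infection, using Youden's J statistic on toy data.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(5)
infected = rng.integers(0, 2, 902)
hba1c = np.where(infected, rng.normal(9.5, 1.2, 902),
                           rng.normal(7.8, 1.2, 902))

fpr, tpr, thresholds = roc_curve(infected, hba1c)
best = np.argmax(tpr - fpr)   # maximize sensitivity + specificity - 1
print(f"threshold: {thresholds[best]:.1f}%, "
      f"sensitivity: {tpr[best]:.2f}, specificity: {1 - fpr[best]:.2f}")
```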
Study of the Effects of Metallurgical Factors on the Growth of Fatigue Microcracks.
1987-11-25
polycrystalline) yield stress. 8. The resulting model, predicated on the notion of orientation-dependent microplastic grains, predicts quantitatively the entire... [Recoverable figure caption: Figure 5. Predicted crack growth curves for small cracks propagating from a microplastic grain into elastic-plastic, contiguous grains; Δσ is defined as... or the crack tip opening displacement, δ.]
Sudden gains in the outpatient treatment of anorexia nervosa: A process-outcome study.
Cartwright, Anna; Cheng, Yat Ping; Schmidt, Ulrike; Landau, Sabine
2017-10-01
Sudden gains (SGs), broadly defined as sudden symptom reductions occurring between two consecutive treatment sessions, have been associated with improved treatment outcomes in anxiety and depression. The present study is the first to formally define SGs in anorexia nervosa and explore the characteristics, demographic and baseline clinical predictors, and clinical impact of SGs in anorexia nervosa. This is a secondary analysis of data from 89 outpatients with broadly defined anorexia nervosa who received one of two psychotherapeutic interventions as part of the MOSAIC trial (Schmidt et al., 2015). SGs were defined using session-by-session body mass index (BMI) measures. This study investigated whether SGs were associated with changes in BMI, eating disorder symptomology, general psychopathology, and psychosocial impairment between baseline and 6, 12, and 24 months follow-up. SGs, experienced by 61.8% of patients, mostly occurred during the early and middle phases of treatment. A larger proportion of SGs predicted larger increases in BMI between baseline and 6, 12, and 24 months follow-up. Amongst those experiencing at least one SG, fewer days between baseline and a patient's first SG predicted a larger increase in BMI between baseline and both 6 and 12 months follow-up. The proportion and timing of SGs did not predict changes in other outcome measures. SGs in BMI during the outpatient treatment of anorexia nervosa are clinically useful predictors of longer-term weight outcomes. © 2017 Wiley Periodicals, Inc.
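A hedged sketch of a sudden-gain rule on session-by-session BMI: flag a gain when the between-session increase exceeds a chosen cut-off. The cut-off and series below are illustrative only; the study's formal criteria (adapted from the depression SG literature) are more involved.

```python
# Flag sessions preceded by a BMI jump of at least `threshold` kg/m^2.
import numpy as np

bmi = np.array([15.1, 15.2, 15.9, 16.0, 16.1, 16.9, 17.0])  # toy series
threshold = 0.5                                              # assumed cut-off

gains = np.flatnonzero(np.diff(bmi) >= threshold) + 1  # session index of SG
print(f"sudden gains at sessions: {gains.tolist()}")   # [2, 5] here
```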
Li, Wen; Arasu, Vignesh; Newitt, David C.; Jones, Ella F.; Wilmes, Lisa; Gibbs, Jessica; Kornak, John; Joe, Bonnie N.; Esserman, Laura J.; Hylton, Nola M.
2016-01-01
Functional tumor volume (FTV) measurements by dynamic contrast-enhanced magnetic resonance imaging can predict treatment outcomes for women receiving neoadjuvant chemotherapy for breast cancer. Here, we explore whether the contrast thresholds used to define FTV could be adjusted by breast cancer subtype to improve predictive performance. Absolute FTV and percent change in FTV (ΔFTV) at sequential time-points during treatment were calculated and investigated as predictors of pathologic complete response at surgery. Early percent enhancement threshold (PEt) and signal enhancement ratio threshold (SERt) were varied. The predictive performance of resulting FTV predictors was evaluated using the area under the receiver operating characteristic curve. A total of 116 patients were studied, both as a full cohort and in the following groups defined by hormone receptor (HR) and HER2 receptor subtype: 45 HR+/HER2−, 39 HER2+, and 30 triple-negative. High AUCs were found at different ranges of PEt and SERt levels in different subtypes. Findings from this study suggest that the predictive performance of MRI for treatment response varies with contrast thresholds, and that pathologic complete response prediction may be improved through subtype-specific contrast enhancement thresholds. A validation study is underway with a larger patient population. PMID:28066808
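To make the FTV definition concrete, here is a minimal sketch: voxels whose early percent enhancement exceeds PEt and whose signal enhancement ratio exceeds SERt count toward functional tumor volume. The maps, voxel size, and threshold values are placeholders; the study tunes PEt/SERt per subtype.

```python
# Threshold-defined functional tumor volume from toy enhancement maps.
import numpy as np

rng = np.random.default_rng(7)
pe_map  = rng.uniform(0, 200, size=(128, 128, 40))  # early % enhancement
ser_map = rng.uniform(0, 3,   size=(128, 128, 40))  # signal enhancement ratio
voxel_ml = 0.001 * 1.4 * 1.4 * 2.0                  # voxel volume in mL (assumed)

pet, sert = 70.0, 0.0                               # candidate thresholds
ftv_ml = ((pe_map > pet) & (ser_map > sert)).sum() * voxel_ml
print(f"FTV: {ftv_ml:.1f} mL")
```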
Carroll, Suzanne J; Paquet, Catherine; Howard, Natasha J; Coffee, Neil T; Adams, Robert J; Taylor, Anne W; Niyonsenga, Theo; Daniel, Mark
2017-02-02
Individual-level health outcomes are shaped by environmental risk conditions. Norms figure prominently in socio-behavioural theories, yet spatial variations in health-related norms have rarely been investigated as environmental risk conditions. This study assessed: 1) the contributions of local descriptive norms for overweight/obesity and dietary behaviour to 10-year change in glycosylated haemoglobin (HbA1c), accounting for food resource availability; and 2) whether associations between local descriptive norms and HbA1c were moderated by food resource availability. HbA1c, representing cardiometabolic risk, was measured three times over 10 years for a population-based biomedical cohort of adults in Adelaide, South Australia. Residential environmental exposures were defined using 1600 m participant-centred road-network buffers. Local descriptive norms for overweight/obesity and insufficient fruit intake (proportion of residents with BMI ≥ 25 kg/m² [n = 1890] or fruit intake of <2 serves/day [n = 1945], respectively) were aggregated from responses to a separate geocoded population survey. Fast-food and healthful food resource availability (counts) were extracted from a retail database. Separate sets of multilevel models included different predictors, one local descriptive norm and either fast-food or healthful food resource availability, with area-level education and individual-level covariates (age, sex, employment status, education, marital status, and smoking status). Interactions between local descriptive norms and food resource availability were tested. HbA1c concentration rose over time. Local descriptive norms for overweight/obesity and insufficient fruit intake predicted greater rates of increase in HbA1c. Neither fast-food nor healthful food resource availability was associated with change in HbA1c. Greater healthful food resource availability reduced the rate of increase in HbA1c concentration attributed to the overweight/obesity norm. Local descriptive health-related norms, not food resource availability, predicted 10-year change in HbA1c. Null findings for food resource availability may reflect a sufficiency or minimum threshold level of resources such that availability poses no barrier to obtaining healthful or unhealthful foods in this region. However, the influence of local descriptive norms varied according to food resource availability in effects on HbA1c. Local descriptive health-related norms have received little attention thus far but are important influences on individual cardiometabolic risk. Further research is needed to explore how local descriptive norms contribute to chronic disease risk and outcomes.
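A sketch of one of the multilevel models described above: HbA1c regressed on a local descriptive norm, food resource availability, and their interaction, with a random intercept per residential area. Column names are hypothetical stand-ins for the study's variables, and the data are synthetic.

```python
# Mixed-effects model with a norm x food-availability interaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 500
df = pd.DataFrame({
    "hba1c": rng.normal(5.6, 0.5, n),
    "obesity_norm": rng.uniform(0.2, 0.6, n),  # local overweight/obesity share
    "healthful_food": rng.integers(0, 15, n),  # healthful outlet count in buffer
    "age": rng.normal(55, 12, n),
    "area": rng.integers(0, 40, n),            # area-level grouping factor
})

model = smf.mixedlm("hba1c ~ obesity_norm * healthful_food + age",
                    df, groups=df["area"])
print(model.fit().summary())
```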
Wuchty, S; Rajagopala, S V; Blazie, S M; Parrish, J R; Khuri, S; Finley, R L; Uetz, P
2017-01-01
The functions of roughly a third of all proteins in Streptococcus pneumoniae, a significant human-pathogenic bacterium, are unknown. Using a yeast two-hybrid approach, we have determined more than 2,000 novel protein interactions in this organism. We augmented this network with meta-interactome data that we defined as the pool of all interactions between evolutionarily conserved proteins in other bacteria. We found that such interactions significantly improved our ability to predict a protein's function, allowing us to provide functional predictions for 299 S. pneumoniae proteins with previously unknown functions. IMPORTANCE Identification of protein interactions in bacterial species can help define the individual roles that proteins play in cellular pathways and pathogenesis. Very few protein interactions have been identified for the important human pathogen S. pneumoniae. We used an experimental approach to identify over 2,000 new protein interactions for S. pneumoniae, the most extensive interactome data for this bacterium to date. To predict protein function, we used our interactome data augmented with interactions from other closely related bacteria. The combination of the experimental data and meta-interactome data significantly improved the prediction results, allowing us to assign possible functions to a large number of poorly characterized proteins.
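A hedged sketch of interaction-based function prediction in its simplest form: assign an uncharacterized protein the most common annotation among its network neighbors (a basic guilt-by-association rule; the study's approach additionally weighs conserved "meta-interactome" edges from other bacteria). Locus tags and annotations below are hypothetical.

```python
# Majority-vote function prediction over a protein interaction network.
from collections import Counter

interactions = [("spr0001", "spr0334"), ("spr0001", "spr1120"),
                ("spr0001", "spr0778")]                # hypothetical edges
annotations = {"spr0334": "cell division", "spr1120": "cell division",
               "spr0778": "transcription"}             # known neighbor functions

neighbors = [b for a, b in interactions if a == "spr0001"]
votes = Counter(annotations[n] for n in neighbors if n in annotations)
print(votes.most_common(1))   # [('cell division', 2)]
```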