Comparison of Performance Predictions for New Low-Thrust Trajectory Tools
NASA Technical Reports Server (NTRS)
Polsgrove, Tara; Kos, Larry; Hopkins, Randall; Crane, Tracie
2006-01-01
Several low-thrust trajectory optimization tools have been developed over the last 3.5 years by the Low Thrust Trajectory Tools development team. This toolset includes both low-medium fidelity and high fidelity tools which allow the analyst to quickly research a wide mission trade space and perform advanced mission design. These tools were tested using a set of reference trajectories that exercised each tool's unique capabilities. This paper compares the performance predictions of the various tools against several of the reference trajectories. The intent is to verify agreement between the high fidelity tools and to quantify the performance prediction differences between tools of different fidelity levels.
Sebok, Angelia; Wickens, Christopher D
2017-03-01
The objectives were to (a) implement theoretical perspectives regarding human-automation interaction (HAI) into model-based tools to assist designers in developing systems that support effective performance and (b) conduct validations to assess the ability of the models to predict operator performance. Two key concepts in HAI, the lumberjack analogy and black swan events, have been studied extensively. The lumberjack analogy describes the effects of imperfect automation on operator performance. In routine operations, an increased degree of automation supports performance, but in failure conditions, increased automation results in more significantly impaired performance. Black swans are the rare and unexpected failures of imperfect automation. The lumberjack analogy and black swan concepts have been implemented into three model-based tools that predict operator performance in different systems. These tools include a flight management system, a remotely controlled robotic arm, and an environmental process control system. Each modeling effort included a corresponding validation. In one validation, the software tool was used to compare three flight management system designs, which were ranked in the same order as predicted by subject matter experts. The second validation compared model-predicted operator complacency with empirical performance in the same conditions. The third validation compared model-predicted and empirically determined time to detect and repair faults in four automation conditions. The three model-based tools offer useful ways to predict operator performance in complex systems. The three tools offer ways to predict the effects of different automation designs on operator performance.
The development and testing of a skin tear risk assessment tool.
Newall, Nelly; Lewin, Gill F; Bulsara, Max K; Carville, Keryln J; Leslie, Gavin D; Roberts, Pam A
2017-02-01
The aim of the present study is to develop a reliable and valid skin tear risk assessment tool. The six characteristics identified in a previous case control study as constituting the best risk model for skin tear development were used to construct a risk assessment tool. The ability of the tool to predict skin tear development was then tested in a prospective study. Between August 2012 and September 2013, 1466 tertiary hospital patients were assessed at admission and followed up for 10 days to see if they developed a skin tear. The predictive validity of the tool was assessed using receiver operating characteristic (ROC) analysis. When the tool was found not to have performed as well as hoped, secondary analyses were performed to determine whether a potentially better performing risk model could be identified. The tool was found to have high sensitivity but low specificity and therefore have inadequate predictive validity. Secondary analysis of the combined data from this and the previous case control study identified an alternative better performing risk model. The tool developed and tested in this study was found to have inadequate predictive validity. The predictive validity of an alternative, more parsimonious model now needs to be tested. © 2015 Medicalhelplines.com Inc and John Wiley & Sons Ltd.
PredictSNP: Robust and Accurate Consensus Classifier for Prediction of Disease-Related Mutations
Bendl, Jaroslav; Stourac, Jan; Salanda, Ondrej; Pavelka, Antonin; Wieben, Eric D.; Zendulka, Jaroslav; Brezovsky, Jan; Damborsky, Jiri
2014-01-01
Single nucleotide variants represent a prevalent form of genetic variation. Mutations in the coding regions are frequently associated with the development of various genetic diseases. Computational tools for the prediction of the effects of mutations on protein function are very important for the analysis of single nucleotide variants and their prioritization for experimental characterization. Many computational tools are already widely employed for this purpose. Unfortunately, their comparison and further improvement is hindered by large overlaps between the training datasets and benchmark datasets, which lead to biased and overly optimistic reported performances. In this study, we constructed three independent datasets by removing all duplicates, inconsistencies and mutations previously used in the training of the evaluated tools. The benchmark dataset containing over 43,000 mutations was employed for the unbiased evaluation of eight established prediction tools: MAPP, nsSNPAnalyzer, PANTHER, PhD-SNP, PolyPhen-1, PolyPhen-2, SIFT and SNAP. The six best performing tools were combined into a consensus classifier, PredictSNP, resulting in significantly improved prediction performance while returning results for all mutations, confirming that consensus prediction represents an accurate and robust alternative to the predictions delivered by individual tools. A user-friendly web interface enables easy access to all eight prediction tools, the consensus classifier PredictSNP and annotations from the Protein Mutant Database and the UniProt database. The web server and the datasets are freely available to the academic community at http://loschmidt.chemi.muni.cz/predictsnp. PMID:24453961
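For illustration, the core idea of a consensus classifier can be sketched as a vote over the individual tools' binary calls. This is a minimal sketch with hypothetical inputs; PredictSNP's actual scheme weights the constituent tools by transformed confidence scores, which is not reproduced here.

```python
# Minimal sketch of a consensus call over binary tool outputs.
# Hypothetical inputs; not PredictSNP's confidence-weighted scheme.
from collections import Counter

def consensus_call(calls):
    """calls: dict mapping tool name -> 'deleterious' or 'neutral'."""
    votes = Counter(calls.values())
    # Majority vote; a tie is resolved conservatively as 'deleterious'.
    return 'deleterious' if votes['deleterious'] >= votes['neutral'] else 'neutral'

example = {'MAPP': 'deleterious', 'PhD-SNP': 'deleterious',
           'PolyPhen-2': 'deleterious', 'SIFT': 'deleterious',
           'PolyPhen-1': 'neutral', 'SNAP': 'neutral'}
print(consensus_call(example))  # -> deleterious
```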
Reifman, Jaques; Kumar, Kamal; Wesensten, Nancy J; Tountas, Nikolaos A; Balkin, Thomas J; Ramakrishnan, Sridhar
2016-12-01
Computational tools that predict the effects of daily sleep/wake amounts on neurobehavioral performance are critical components of fatigue management systems, allowing for the identification of periods during which individuals are at increased risk for performance errors. However, none of the existing computational tools is publicly available, and the commercially available tools do not account for the beneficial effects of caffeine on performance, limiting their practical utility. Here, we introduce 2B-Alert Web, an open-access tool for predicting neurobehavioral performance, which accounts for the effects of sleep/wake schedules, time of day, and caffeine consumption, while incorporating the latest scientific findings in sleep restriction, sleep extension, and recovery sleep. We combined our validated Unified Model of Performance and our validated caffeine model to form a single, integrated modeling framework instantiated as a Web-enabled tool. 2B-Alert Web allows users to input daily sleep/wake schedules and caffeine consumption (dosage and time) to obtain group-average predictions of neurobehavioral performance based on psychomotor vigilance tasks. 2B-Alert Web is accessible at: https://2b-alert-web.bhsai.org. The 2B-Alert Web tool allows users to obtain predictions for mean response time, mean reciprocal response time, and number of lapses. The graphing tool allows for simultaneous display of up to seven different sleep/wake and caffeine schedules. The schedules and corresponding predicted outputs can be saved as a Microsoft Excel file; the corresponding plots can be saved as an image file. The schedules and predictions are erased when the user logs off, thereby maintaining privacy and confidentiality. The publicly accessible 2B-Alert Web tool is available for operators, schedulers, and neurobehavioral scientists as well as the general public to determine the impact of any given sleep/wake schedule, caffeine consumption, and time of day on performance of a group of individuals. This evidence-based tool can be used as a decision aid to design effective work schedules, guide the design of future sleep restriction and caffeine studies, and increase public awareness of the effects of sleep amounts, time of day, and caffeine on alertness. © 2016 Associated Professional Sleep Societies, LLC.
NASA Technical Reports Server (NTRS)
Sebok, Angelia; Wickens, Christopher; Sargent, Robert
2015-01-01
One human factors challenge is predicting operator performance in novel situations. Approaches such as drawing on relevant previous experience, and developing computational models to predict operator performance in complex situations, offer potential methods to address this challenge. A few concerns with modeling operator performance are that models need to be realistic, and that they need to be tested empirically and validated. In addition, many existing human performance modeling tools are complex and require that an analyst gain significant experience to be able to develop models for meaningful data collection. This paper describes an effort to address these challenges by developing an easy-to-use model-based tool, using models that were developed from a review of the existing human performance literature and targeted experimental studies, and performing an empirical validation of key model predictions.
Predicting performance with traffic analysis tools : final report.
DOT National Transportation Integrated Search
2008-03-01
This document provides insights into the common pitfalls and challenges associated with use of traffic analysis tools for predicting future performance of a transportation facility. It provides five in-depth case studies that demonstrate common ways ...
Predicting Operator Execution Times Using CogTool
NASA Technical Reports Server (NTRS)
Santiago-Espada, Yamira; Latorella, Kara A.
2013-01-01
Researchers and developers of NextGen systems can use predictive human performance modeling tools as an initial approach to obtain skilled user performance times analytically, before system testing with users. This paper describes CogTool models for a two-pilot crew executing two different types of datalink clearance acceptance tasks on two different simulation platforms. The CogTool time estimates for accepting and executing Required Time of Arrival and Interval Management clearances were compared to empirical data observed in videotapes and recorded in simulation files. Results indicate no statistically significant difference between the empirical data and the CogTool predictions. A population comparison test found no significant differences between the CogTool estimates and the empirical execution times for any of the four test conditions. We discuss modeling caveats and considerations for applying CogTool to crew performance modeling in advanced cockpit environments.
An automated benchmarking platform for MHC class II binding prediction methods.
Andreatta, Massimo; Trolle, Thomas; Yan, Zhen; Greenbaum, Jason A; Peters, Bjoern; Nielsen, Morten
2018-05-01
Computational methods for the prediction of peptide-MHC binding have become an integral and essential component of candidate selection in experimental T cell epitope discovery studies. The sheer number of published prediction methods, and often discordant reports on their performance, poses a considerable quandary to the experimentalist who needs to choose the best tool for their research. With the goal to provide an unbiased, transparent evaluation of the state-of-the-art in the field, we created an automated platform to benchmark peptide-MHC class II binding prediction tools. The platform evaluates the absolute and relative predictive performance of all participating tools on data newly entered into the Immune Epitope Database (IEDB) before they are made public, thereby providing a frequent, unbiased assessment of available prediction tools. The benchmark runs on a weekly basis, is fully automated, and displays up-to-date results on a publicly accessible website. The initial benchmark described here included six commonly used prediction servers, but other tools are encouraged to join with a simple sign-up procedure. Performance evaluation on 59 data sets composed of over 10 000 binding affinity measurements suggested that NetMHCIIpan is currently the most accurate tool, followed by NN-align and the IEDB consensus method. Weekly reports on the participating methods can be found online at: http://tools.iedb.org/auto_bench/mhcii/weekly/. Contact: mniel@bioinformatics.dtu.dk. Supplementary data are available at Bioinformatics online.
Automated benchmarking of peptide-MHC class I binding predictions.
Trolle, Thomas; Metushi, Imir G; Greenbaum, Jason A; Kim, Yohan; Sidney, John; Lund, Ole; Sette, Alessandro; Peters, Bjoern; Nielsen, Morten
2015-07-01
Numerous in silico methods predicting peptide binding to major histocompatibility complex (MHC) class I molecules have been developed over the last decades. However, the multitude of available prediction tools makes it non-trivial for the end-user to select which tool to use for a given task. To provide a solid basis on which to compare different prediction tools, we here describe a framework for the automated benchmarking of peptide-MHC class I binding prediction tools. The framework runs weekly benchmarks on data that are newly entered into the Immune Epitope Database (IEDB), giving the public access to frequent, up-to-date performance evaluations of all participating tools. To overcome potential selection bias in the data included in the IEDB, a strategy was implemented that suggests a set of peptides for which different prediction methods give divergent predictions as to their binding capability. Upon experimental binding validation, these peptides entered the benchmark study. The benchmark has run for 15 weeks and includes evaluation of 44 datasets covering 17 MHC alleles and more than 4000 peptide-MHC binding measurements. Inspection of the results allows the end-user to make educated selections between participating tools. Of the four participating servers, NetMHCpan performed the best, followed by ANN, SMM and finally ARB. Up-to-date performance evaluations of each server can be found online at http://tools.iedb.org/auto_bench/mhci/weekly. All prediction tool developers are invited to participate in the benchmark. Sign-up instructions are available at http://tools.iedb.org/auto_bench/mhci/join. Contact: mniel@cbs.dtu.dk or bpeters@liai.org. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
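To illustrate the kind of evaluation such a platform performs, per-dataset scores can be aggregated into an overall ranking of participating servers. The numbers below are invented; the benchmark's actual metrics and aggregation rules are documented on the IEDB website.

```python
# Toy sketch: aggregate per-dataset AUCs into an overall server ranking.
import pandas as pd

results = pd.DataFrame({
    "dataset": ["wk01", "wk01", "wk01", "wk02", "wk02", "wk02"],
    "server":  ["NetMHCpan", "ANN", "SMM", "NetMHCpan", "ANN", "SMM"],
    "auc":     [0.93, 0.91, 0.88, 0.90, 0.89, 0.85],  # invented values
})
ranking = results.groupby("server")["auc"].mean().sort_values(ascending=False)
print(ranking)  # NetMHCpan ranks first, consistent with the reported ordering
```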
van Bokhorst-de van der Schueren, Marian A E; Guaitoli, Patrícia Realino; Jansma, Elise P; de Vet, Henrica C W
2014-02-01
Numerous nutrition screening tools for the hospital setting have been developed. The aim of this systematic review is to study construct or criterion validity and predictive validity of nutrition screening tools for the general hospital setting. A systematic review of English, French, German, Spanish, Portuguese and Dutch articles identified via MEDLINE, Cinahl and EMBASE (from inception to the 2nd of February 2012). Additional studies were identified by checking reference lists of identified manuscripts. Search terms included key words for malnutrition, screening or assessment instruments, and terms for hospital setting and adults. Data were extracted independently by 2 authors. Only studies expressing the (construct, criterion or predictive) validity of a tool were included. 83 studies (32 screening tools) were identified: 42 studies on construct or criterion validity versus a reference method and 51 studies on predictive validity on outcome (i.e. length of stay, mortality or complications). None of the tools performed consistently well to establish the patients' nutritional status. For the elderly, MNA performed fair to good, for the adults MUST performed fair to good. SGA, NRS-2002 and MUST performed well in predicting outcome in approximately half of the studies reviewed in adults, but not in older patients. Not one single screening or assessment tool is capable of adequate nutrition screening as well as predicting poor nutrition related outcome. Development of new tools seems redundant and will most probably not lead to new insights. New studies comparing different tools within one patient population are required. Copyright © 2013 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
Assessment of driving-related skills for older drivers : traffic tech.
DOT National Transportation Integrated Search
2010-04-01
Relating behind-the-wheel driving performance to performance on office-based screening tools is challenging. It is important to use tools that are predictive of poor driving performance (sensitivity), but also to find tools that do not have h...
Cohen-Stavi, Chandra; Leventer-Roberts, Maya; Balicer, Ran D
2017-01-01
Objective: To directly compare the performance and externally validate the three most studied prediction tools for osteoporotic fractures—QFracture, FRAX, and Garvan—using data from electronic health records. Design: Retrospective cohort study. Setting: Payer provider healthcare organisation in Israel. Participants: 1 054 815 members aged 50 to 90 years for comparison between tools, and cohorts of different age ranges, corresponding to those in each tool's development study, for tool-specific external validation. Main outcome measure: First diagnosis of a major osteoporotic fracture (for QFracture and FRAX tools) and hip fractures (for all three tools) recorded in electronic health records from 2010 to 2014. Observed fracture rates were compared to probabilities predicted retrospectively as of 2010. Results: The observed five year hip fracture rate was 2.7% and the rate for major osteoporotic fractures was 7.7%. The areas under the receiver operating characteristic curve (AUC) for hip fracture prediction were 82.7% for QFracture, 81.5% for FRAX, and 77.8% for Garvan. For major osteoporotic fractures, AUCs were 71.2% for QFracture and 71.4% for FRAX. All the tools underestimated the fracture risk, but the average observed to predicted ratios and the calibration slopes of FRAX were closest to 1. Tool-specific validation analyses yielded hip fracture prediction AUCs of 88.0% for QFracture (among those aged 30-100 years), 81.5% for FRAX (50-90 years), and 71.2% for Garvan (60-95 years). Conclusions: Both QFracture and FRAX had high discriminatory power for hip fracture prediction, with QFracture performing slightly better. This performance gap was more pronounced in previous studies, likely because of broader age inclusion criteria for QFracture validations. The simpler FRAX performed almost as well as QFracture for hip fracture prediction, and may have advantages if some of the input data required for QFracture are not available. However, both tools require calibration before implementation. PMID:28104610
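The two properties assessed here, discrimination (AUC) and calibration (observed-to-predicted ratio and calibration slope), can be computed from per-person predicted risks and observed outcomes as sketched below. The data and the underestimation factor are synthetic assumptions for illustration, not the study's code.

```python
# Sketch: discrimination and calibration of a risk prediction tool.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
p_pred = rng.uniform(0.01, 0.30, size=5000)         # tool's predicted 5-year risks
y = rng.binomial(1, np.clip(p_pred * 1.3, 0, 1))    # outcomes: tool underestimates

auc = roc_auc_score(y, p_pred)                      # discrimination
logit = np.log(p_pred / (1 - p_pred)).reshape(-1, 1)
slope = LogisticRegression().fit(logit, y).coef_[0][0]  # calibration slope
oe_ratio = y.mean() / p_pred.mean()                 # observed:predicted ratio
print(f"AUC={auc:.3f}, calibration slope={slope:.2f}, O/E={oe_ratio:.2f}")
```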
Cockpit System Situational Awareness Modeling Tool
NASA Technical Reports Server (NTRS)
Keller, John; Lebiere, Christian; Shay, Rick; Latorella, Kara
2004-01-01
This project explored the possibility of predicting pilot situational awareness (SA) using human performance modeling techniques for the purpose of evaluating developing cockpit systems. The Improved Performance Research Integration Tool (IMPRINT) was combined with the Adaptive Control of Thought-Rational (ACT-R) cognitive modeling architecture to produce a tool that can model both the discrete tasks of pilots and the cognitive processes associated with SA. The techniques for using this tool to predict SA were demonstrated using the newly developed Aviation Weather Information (AWIN) system. By providing an SA prediction tool to cockpit system designers, cockpit concepts can be assessed early in the design process while providing a cost-effective complement to the traditional pilot-in-the-loop experiments and data collection techniques.
Evaluation of in silico tools to predict the skin sensitization potential of chemicals.
Verheyen, G R; Braeken, E; Van Deun, K; Van Miert, S
2017-01-01
Public domain and commercial in silico tools were compared for their performance in predicting the skin sensitization potential of chemicals. The packages were either statistics-based (Vega, CASE Ultra) or rule-based (OECD Toolbox, Toxtree, Derek Nexus). In practice, several of these in silico tools are used in gap filling and read-across, but here their use was limited to making predictions based on the presence/absence of structural features associated with sensitization. The top 400 ranking substances of the ATSDR 2011 Priority List of Hazardous Substances were selected as a starting point. Experimental information was identified for 160 chemically diverse substances (82 positive and 78 negative). The prediction for skin sensitization potential was compared with the experimental data. Rule-based tools performed slightly better, with accuracies ranging from 0.6 (OECD Toolbox) to 0.78 (Derek Nexus), compared with statistics-based tools that had accuracies ranging from 0.48 (Vega) to 0.73 (CASE Ultra - LLNA weak model). Combining models increased the performance, with positive and negative predictive values up to 80% and 84%, respectively. However, the number of substances that were predicted positive or negative for skin sensitization in both models was low. Adding more substances to the dataset will increase the confidence in the conclusions reached. The insights obtained in this evaluation are incorporated in a web database (www.asopus.weebly.com) that provides a potential end user context for the scope and performance of different in silico tools with respect to a common dataset of curated skin sensitization data.
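For reference, the accuracies and predictive values reported in studies like this one derive from a standard confusion-matrix calculation, sketched here with hypothetical counts rather than the study's actual data.

```python
# Sketch: binary classification metrics from confusion-matrix counts.
def binary_metrics(tp, fp, tn, fn):
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv":         tp / (tp + fp),   # positive predictive value
        "npv":         tn / (tn + fn),   # negative predictive value
    }

# Hypothetical tool evaluated on 160 substances (82 positive, 78 negative)
print(binary_metrics(tp=68, fp=14, tn=64, fn=14))
```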
Kazaura, Kamugisha; Omae, Kazunori; Suzuki, Toshiji; Matsumoto, Mitsuji; Mutafungwa, Edward; Korhonen, Timo O; Murakami, Tadaaki; Takahashi, Koichi; Matsumoto, Hideki; Wakamori, Kazuhiko; Arimoto, Yoshinori
2006-06-12
The deterioration and deformation of a free-space optical beam wave-front as it propagates through the atmosphere can reduce the link availability and may introduce burst errors thus degrading the performance of the system. We investigate the suitability of utilizing soft-computing (SC) based tools for improving performance of free-space optical (FSO) communications systems. The SC based tools are used for the prediction of key parameters of a FSO communications system. Measured data collected from an experimental FSO communication system is used as training and testing data for a proposed multi-layer neural network predictor (MNNP) used to predict future parameter values. The predicted parameters are essential for reducing transmission errors by improving the antenna's accuracy of tracking data beams. This is particularly essential for periods considered to be of strong atmospheric turbulence. The parameter values predicted using the proposed tool show acceptable conformity with original measurements.
Price, C L; Brace-McDonnell, S J; Stallard, N; Bleetman, A; Maconochie, I; Perkins, G D
2016-05-01
Context: Triage tools are an essential component of the emergency response to a major incident. Although fortunately rare, mass casualty incidents involving children are possible, which mandates reliable triage tools to determine the priority of treatment. Objective: To determine the performance characteristics of five major incident triage tools amongst paediatric casualties who have sustained traumatic injuries. Design: Retrospective observational cohort study using data from 31,292 patients aged less than 16 years who sustained a traumatic injury. Data were obtained from the UK Trauma Audit and Research Network (TARN) database. Interventions: Statistical evaluation of five triage tools (JumpSTART, START, CareFlight, Paediatric Triage Tape/Sieve and Triage Sort) to predict death or severe traumatic injury (injury severity score >15). Main outcome measures: Performance characteristics of triage tools (sensitivity, specificity and level of agreement between triage tools) to identify patients at high risk of death or severe injury. Results: Of the 31,292 cases, 1029 died (3.3%), 6842 (21.9%) had major trauma (defined by an injury severity score >15) and 14,711 (47%) were aged 8 years or younger. There was variation in the accuracy of the tools in predicting major trauma or death (sensitivities ranging between 36.4 and 96.2%; specificities 66.0-89.8%). Performance characteristics varied with the age of the child. CareFlight had the best overall performance at predicting death, with the following sensitivity and specificity (95% CI) respectively: 95.3% (93.8-96.8) and 80.4% (80.0-80.9). JumpSTART was superior for the triaging of children under 8 years; sensitivity and specificity (95% CI) respectively: 86.3% (83.1-89.5) and 84.8% (84.2-85.5). The triage tools were generally better at identifying patients who would die than those with non-fatal severe injury. Conclusions: This statistical evaluation has demonstrated variability in the accuracy of triage tools at predicting outcomes for children who sustain traumatic injuries. No single tool performed consistently well across all evaluated scenarios. Copyright © 2015 Elsevier Ltd. All rights reserved.
Comparison of in silico models for prediction of mutagenicity.
Bakhtyari, Nazanin G; Raitano, Giuseppa; Benfenati, Emilio; Martin, Todd; Young, Douglas
2013-01-01
Using a dataset with more than 6000 compounds, the performance of eight quantitative structure-activity relationship (QSAR) models was evaluated: ACD/Tox Suite; Absorption, Distribution, Metabolism, Elimination, and Toxicity of chemical substances (ADMET) predictor; Derek; Toxicity Estimation Software Tool (T.E.S.T.); TOxicity Prediction by Komputer Assisted Technology (TOPKAT); Toxtree; CAESAR; and SARpy (SAR in python). In general, the results showed a high level of performance. To obtain a realistic estimate of predictive ability, the results for chemicals inside and outside the training set of each model were considered. The effect of applicability domain tools (when available) on prediction accuracy was also evaluated. The predictive tools included QSAR models, knowledge-based systems, and a combination of both methods. Models based on statistical QSAR methods gave better results.
Predicting space telerobotic operator training performance from human spatial ability assessment
NASA Astrophysics Data System (ADS)
Liu, Andrew M.; Oman, Charles M.; Galvan, Raquel; Natapoff, Alan
2013-11-01
Our goal was to determine whether existing tests of spatial ability can predict an astronaut's qualification test performance after robotic training. Because training astronauts to be qualified robotics operators is so long and expensive, NASA is interested in tools that can predict robotics performance before training begins. Currently, the Astronaut Office does not have a validated tool to predict robotics ability as part of its astronaut selection or training process. Commonly used tests of human spatial ability may provide such a tool to predict robotics ability. We tested the spatial ability of 50 active astronauts who had completed at least one robotics training course, then used logistic regression models to analyze the correlation between spatial ability test scores and the astronauts' performance in their evaluation test at the end of the training course. The fit of the logistic function to our data is statistically significant for several spatial tests. However, the prediction performance of the logistic model depends on the criterion threshold assumed. To clarify the critical selection issues, we show how the probability of correct classification vs. misclassification varies as a function of the mental rotation test criterion level. Since the costs of misclassification are low, the logistic models of spatial ability and robotic performance are reliable enough only to be used to customize regular and remedial training. We suggest several changes in tracking performance throughout robotics training that could improve the range and reliability of predictive models.
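A minimal sketch of the analysis described: fit a logistic model of pass/fail outcome against a spatial test score, then vary the criterion threshold to see how correct classifications trade off against misclassifications. All data and coefficients below are synthetic assumptions, not the study's dataset.

```python
# Sketch: logistic model of qualification outcome vs. spatial test score.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
scores = rng.normal(15, 4, size=50)              # e.g. mental rotation scores
p_pass = 1 / (1 + np.exp(-0.5 * (scores - 14)))  # assumed true relationship
passed = rng.binomial(1, p_pass)

model = LogisticRegression().fit(scores.reshape(-1, 1), passed)
for thr in (0.5, 0.7, 0.9):                      # criterion threshold
    pred = model.predict_proba(scores.reshape(-1, 1))[:, 1] >= thr
    hits = np.sum(pred & (passed == 1))          # correctly flagged passes
    false_alarms = np.sum(pred & (passed == 0))
    print(f"threshold {thr}: flagged {pred.sum()}, hits {hits}, false alarms {false_alarms}")
```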
Lai, Fu-Jou; Chang, Hong-Tsun; Wu, Wei-Sheng
2015-01-01
Computational identification of cooperative transcription factor (TF) pairs helps understand the combinatorial regulation of gene expression in eukaryotic cells. Many advanced algorithms have been proposed to predict cooperative TF pairs in yeast. However, it is still difficult to conduct a comprehensive and objective performance comparison of different algorithms because sufficient performance indices and adequate overall performance scores have been lacking. To solve this problem, in our previous study (published in BMC Systems Biology 2014), we adopted/proposed eight performance indices and designed two overall performance scores to compare the performance of 14 existing algorithms for predicting cooperative TF pairs in yeast. Most importantly, our performance comparison framework can be applied to comprehensively and objectively evaluate the performance of a newly developed algorithm. However, to use our framework, researchers have to put in a lot of effort to construct it first. To save researchers this time and effort, here we develop a web tool implementing our performance comparison framework, featuring fast data processing, a comprehensive performance comparison and an easy-to-use web interface. The developed tool is called PCTFPeval (Predicted Cooperative TF Pair evaluator), written in the PHP and Python programming languages. The friendly web interface allows users to input a list of predicted cooperative TF pairs from their algorithm and select (i) the compared algorithms among the 15 existing algorithms, (ii) the performance indices among the eight existing indices, and (iii) the overall performance scores from two possible choices. The comprehensive performance comparison results are then generated in tens of seconds and shown as both bar charts and tables. The original comparison results for each compared algorithm and each selected performance index can be downloaded as text files for further analyses. By allowing users to select eight existing performance indices and 15 existing algorithms for comparison, our web tool benefits researchers who are eager to comprehensively and objectively evaluate the performance of their newly developed algorithm. Thus, our tool greatly expedites progress in the research of computational identification of cooperative TF pairs.
Predictive Data Tools Find Uses in Schools
ERIC Educational Resources Information Center
Sparks, Sarah D.
2011-01-01
The use of analytic tools to predict student performance is exploding in higher education, and experts say the tools show even more promise for K-12 schools, in everything from teacher placement to dropout prevention. Use of such statistical techniques is hindered in precollegiate schools, however, by a lack of researchers trained to help…
In vitro models for the prediction of in vivo performance of oral dosage forms.
Kostewicz, Edmund S; Abrahamsson, Bertil; Brewster, Marcus; Brouwers, Joachim; Butler, James; Carlert, Sara; Dickinson, Paul A; Dressman, Jennifer; Holm, René; Klein, Sandra; Mann, James; McAllister, Mark; Minekus, Mans; Muenster, Uwe; Müllertz, Anette; Verwei, Miriam; Vertzoni, Maria; Weitschies, Werner; Augustijns, Patrick
2014-06-16
Accurate prediction of the in vivo biopharmaceutical performance of oral drug formulations is critical to efficient drug development. Traditionally, in vitro evaluation of oral drug formulations has focused on disintegration and dissolution testing for quality control (QC) purposes. The connection with in vivo biopharmaceutical performance has often been ignored. More recently, the switch to assessing drug products in a more biorelevant and mechanistic manner has advanced the understanding of drug formulation behavior. Notwithstanding this evolution, predicting the in vivo biopharmaceutical performance of formulations that rely on complex intraluminal processes (e.g. solubilization, supersaturation, precipitation…) remains extremely challenging. Concomitantly, the increasing demand for complex formulations to overcome low drug solubility or to control drug release rates urges the development of new in vitro tools. Developing and optimizing innovative, predictive Oral Biopharmaceutical Tools is the main target of the OrBiTo project within the Innovative Medicines Initiative (IMI) framework. A combination of physico-chemical measurements, in vitro tests, in vivo methods, and physiology-based pharmacokinetic modeling is expected to create a unique knowledge platform, enabling the bottlenecks in drug development to be removed and the whole process of drug development to become more efficient. As part of the basis for the OrBiTo project, this review summarizes the current status of predictive in vitro assessment tools for formulation behavior. Both pharmacopoeia-listed apparatus and more advanced tools are discussed. Special attention is paid to major issues limiting the predictive power of traditional tools, including the simulation of dynamic changes in gastrointestinal conditions, the adequate reproduction of gastrointestinal motility, the simulation of supersaturation and precipitation, and the implementation of the solubility-permeability interplay. It is anticipated that the innovative in vitro biopharmaceutical tools arising from the OrBiTo project will lead to improved predictions for in vivo behavior of drug formulations in the GI tract. Copyright © 2013 Elsevier B.V. All rights reserved.
A community resource benchmarking predictions of peptide binding to MHC-I molecules.
Peters, Bjoern; Bui, Huynh-Hoa; Frankild, Sune; Nielson, Morten; Lundegaard, Claus; Kostem, Emrah; Basch, Derek; Lamberth, Kasper; Harndahl, Mikkel; Fleri, Ward; Wilson, Stephen S; Sidney, John; Lund, Ole; Buus, Soren; Sette, Alessandro
2006-06-09
Recognition of peptides bound to major histocompatibility complex (MHC) class I molecules by T lymphocytes is an essential part of immune surveillance. Each MHC allele has a characteristic peptide binding preference, which can be captured in prediction algorithms, allowing for the rapid scan of entire pathogen proteomes for peptides likely to bind MHC. Here we make public a large set of 48,828 quantitative peptide-binding affinity measurements relating to 48 different mouse, human, macaque, and chimpanzee MHC class I alleles. We use these data to establish a set of benchmark predictions with one neural network method and two matrix-based prediction methods extensively utilized in our groups. In general, the neural network outperforms the matrix-based predictions, mainly due to its ability to generalize even from a small amount of data. We also retrieved predictions from tools publicly available on the internet. While differences in the data used to generate these predictions hamper direct comparisons, we do conclude that tools based on combinatorial peptide libraries perform remarkably well. The transparent prediction evaluation on this dataset provides tool developers with a benchmark for comparison of newly developed prediction methods. In addition, to generate and evaluate our own prediction methods, we have established an easily extensible web-based prediction framework that allows automated side-by-side comparisons of prediction methods implemented by experts. This is an advance over the current practice of tool developers having to generate reference predictions themselves, which can lead to underestimating the performance of prediction methods they are not as familiar with as their own. The overall goal of this effort is to provide a transparent prediction evaluation allowing bioinformaticians to identify promising features of prediction methods and providing guidance to immunologists regarding the reliability of prediction tools.
User manual of the CATSS system (version 1.0) communication analysis tool for space station
NASA Technical Reports Server (NTRS)
Tsang, C. S.; Su, Y. T.; Lindsey, W. C.
1983-01-01
The Communication Analysis Tool for the Space Station (CATSS) is a FORTRAN language software package capable of predicting the communications link performance of the Space Station (SS) communication and tracking (C & T) system. An interactive software package has been developed to run on DEC/VAX computers. CATSS models and evaluates the various C & T links of the SS, which include modulation schemes such as Binary-Phase-Shift-Keying (BPSK), BPSK with Direct Sequence Spread Spectrum (PN/BPSK), and M-ary Frequency-Shift-Keying with Frequency Hopping (FH/MFSK). An optical space communication link is also included. CATSS is a C & T system engineering tool used to predict and analyze system performance for different link environments. Identification of system weaknesses is achieved through evaluation of performance with varying system parameters. System tradeoffs for different values of system parameters are made based on the performance prediction.
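For context, link analysis tools of this kind typically evaluate closed-form error-rate expressions; the standard bit error probability for coherent BPSK over an additive white Gaussian noise channel is given below. Whether CATSS evaluates exactly this expression is not stated in the abstract.

```latex
P_b = Q\!\left(\sqrt{\frac{2E_b}{N_0}}\right),
\qquad
Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} e^{-t^2/2}\, dt
```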
Fortuno, Cristina; James, Paul A; Young, Erin L; Feng, Bing; Olivier, Magali; Pesaran, Tina; Tavtigian, Sean V; Spurdle, Amanda B
2018-05-18
Clinical interpretation of germline missense variants represents a major challenge, including those in the TP53 Li-Fraumeni syndrome gene. Bioinformatic prediction is a key part of variant classification strategies. We aimed to optimize the performance of the Align-GVGD tool used for p53 missense variant prediction, and compare its performance to other bioinformatic tools (SIFT, PolyPhen-2) and ensemble methods (REVEL, BayesDel). Reference sets of assumed pathogenic and assumed benign variants were defined using functional and/or clinical data. Area under the curve and Matthews correlation coefficient (MCC) values were used as objective functions to select an optimized protein multi-sequence alignment with best performance for Align-GVGD. MCC comparison of tools using binary categories showed optimized Align-GVGD (C15 cut-off) combined with BayesDel (0.16 cut-off), or with REVEL (0.5 cut-off), to have the best overall performance. Further, a semi-quantitative approach using multiple tiers of bioinformatic prediction, validated using an independent set of non-functional and functional variants, supported use of Align-GVGD and BayesDel prediction for different strength of evidence levels in ACMG/AMP rules. We provide rationale for bioinformatic tool selection for TP53 variant classification, and have also computed relevant bioinformatic predictions for every possible p53 missense variant to facilitate their use by the scientific and medical community. This article is protected by copyright. All rights reserved.
Knecht, Carolin; Mort, Matthew; Junge, Olaf; Cooper, David N.; Krawczak, Michael
2017-01-01
The in silico prediction of the functional consequences of mutations is an important goal of human pathogenetics. However, bioinformatic tools that classify mutations according to their functionality employ different algorithms so that predictions may vary markedly between tools. We therefore integrated nine popular prediction tools (PolyPhen-2, SNPs&GO, MutPred, SIFT, MutationTaster2, Mutation Assessor and FATHMM as well as conservation-based Grantham Score and PhyloP) into a single predictor. The optimal combination of these tools was selected by means of a wide range of statistical modeling techniques, drawing upon 10 029 disease-causing single nucleotide variants (SNVs) from the Human Gene Mutation Database and 10 002 putatively ‘benign’ non-synonymous SNVs from UCSC. Predictive performance was found to be markedly improved by model-based integration, whilst maximum predictive capability was obtained with either random forest, decision tree or logistic regression analysis. A combination of PolyPhen-2, SNPs&GO, MutPred, MutationTaster2 and FATHMM was found to perform as well as all tools combined. Comparison of our approach with other integrative approaches such as Condel, CoVEC, CAROL, CADD, MetaSVM and MetaLR using an independent validation dataset revealed the superiority of our newly proposed integrative approach. An online implementation of this approach, IMHOTEP (‘Integrating Molecular Heuristics and Other Tools for Effect Prediction’), is provided at http://www.uni-kiel.de/medinfo/cgi-bin/predictor/. PMID:28180317
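Model-based integration of this sort can be sketched as training a learner, such as a random forest, on the individual tools' scores as features. The feature matrix and labels below are synthetic; this is not the IMHOTEP implementation.

```python
# Sketch: random-forest integration of per-tool prediction scores.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 1000
X = rng.uniform(0, 1, size=(n, 5))   # columns: hypothetical tool scores
y = (X.mean(axis=1) + rng.normal(0, 0.15, n) > 0.5).astype(int)  # toy labels

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```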
An, Yi; Wang, Jiawei; Li, Chen; Leier, André; Marquez-Lago, Tatiana; Wilksch, Jonathan; Zhang, Yang; Webb, Geoffrey I; Song, Jiangning; Lithgow, Trevor
2018-01-01
Bacterial effector proteins secreted by various protein secretion systems play crucial roles in host-pathogen interactions. In this context, computational tools capable of accurately predicting effector proteins of the various types of bacterial secretion systems are highly desirable. Existing computational approaches use different machine learning (ML) techniques and heterogeneous features derived from protein sequences and/or structural information. These predictors differ not only in the ML methods used but also in the curated data sets used, the feature selection and their prediction performance. Here, we provide a comprehensive survey and benchmarking of currently available tools for the prediction of effector proteins of bacterial types III, IV and VI secretion systems (T3SS, T4SS and T6SS, respectively). We review core algorithms, feature selection techniques, tool availability and applicability and evaluate the prediction performance based on carefully curated independent test data sets. In an effort to improve predictive performance, we constructed three ensemble models based on ML algorithms by integrating the output of all individual predictors reviewed. Our benchmarks demonstrate that these ensemble models outperform all the reviewed tools for the prediction of effector proteins of T3SS and T4SS. The webserver of the proposed ensemble methods for T3SS and T4SS effector protein prediction is freely available at http://tbooster.erc.monash.edu/index.jsp. We anticipate that this survey will serve as a useful guide for interested users and that the new ensemble predictors will stimulate research into host-pathogen relationships and provide inspiration for the development of new bioinformatics tools for predicting effector proteins of T3SS, T4SS and T6SS. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Leong, Ivone U S; Stuckey, Alexander; Lai, Daniel; Skinner, Jonathan R; Love, Donald R
2015-05-13
Long QT syndrome (LQTS) is an autosomal dominant condition predisposing to sudden death from malignant arrhythmia. Genetic testing identifies many missense single nucleotide variants of uncertain pathogenicity. Establishing genetic pathogenicity is an essential prerequisite to family cascade screening. Many laboratories use in silico prediction tools, either alone or in combination, or metaservers, in order to predict pathogenicity; however, their accuracy in the context of LQTS is unknown. We evaluated the accuracy of five in silico programs and two metaservers in the analysis of LQTS 1-3 gene variants. The in silico tools SIFT, PolyPhen-2, PROVEAN, SNPs&GO and SNAP, either alone or in all possible combinations, and the metaservers Meta-SNP and PredictSNP, were tested on 312 KCNQ1, KCNH2 and SCN5A gene variants that have previously been characterised by either in vitro or co-segregation studies as either "pathogenic" (283) or "benign" (29). The accuracy, sensitivity, specificity and Matthews Correlation Coefficient (MCC) were calculated to determine the best combination of in silico tools for each LQTS gene, and when all genes are combined. The best combination of in silico tools for KCNQ1 is PROVEAN, SNPs&GO and SIFT (accuracy 92.7%, sensitivity 93.1%, specificity 100% and MCC 0.70). The best combination of in silico tools for KCNH2 is SIFT and PROVEAN or PROVEAN, SNPs&GO and SIFT. Both combinations have the same scores for accuracy (91.1%), sensitivity (91.5%), specificity (87.5%) and MCC (0.62). In the case of SCN5A, SNAP and PROVEAN provided the best combination (accuracy 81.4%, sensitivity 86.9%, specificity 50.0%, and MCC 0.32). When all three LQT genes are combined, SIFT, PROVEAN and SNAP is the combination with the best performance (accuracy 82.7%, sensitivity 83.0%, specificity 80.0%, and MCC 0.44). Both metaservers performed better than the single in silico tools; however, they did not perform better than the best performing combination of in silico tools. The combination of in silico tools with the best performance is gene-dependent. The in silico tools reported here may have some value in assessing variants in the KCNQ1 and KCNH2 genes, but caution should be taken when the analysis is applied to SCN5A gene variants.
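The MCC values quoted above are the standard Matthews correlation coefficient computed from confusion-matrix counts:

```latex
\mathrm{MCC} =
\frac{TP \cdot TN - FP \cdot FN}
     {\sqrt{(TP+FP)(TP+FN)(TN+FP)(TN+FN)}}
```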
Rauh, Simone P; Rutters, Femke; van der Heijden, Amber A W A; Luimes, Thomas; Alssema, Marjan; Heymans, Martijn W; Magliano, Dianna J; Shaw, Jonathan E; Beulens, Joline W; Dekker, Jacqueline M
2018-02-01
Chronic cardiometabolic diseases, including cardiovascular disease (CVD), type 2 diabetes (T2D) and chronic kidney disease (CKD), share many modifiable risk factors and can be prevented using combined prevention programs. Valid risk prediction tools are needed to accurately identify individuals at risk. We aimed to validate a previously developed non-invasive risk prediction tool for predicting the combined 7-year-risk for chronic cardiometabolic diseases. The previously developed tool is stratified for sex and contains the predictors age, BMI, waist circumference, use of antihypertensives, smoking, family history of myocardial infarction/stroke, and family history of diabetes. This tool was externally validated, evaluating model performance using area under the receiver operating characteristic curve (AUC)-assessing discrimination-and Hosmer-Lemeshow goodness-of-fit (HL) statistics-assessing calibration. The intercept was recalibrated to improve calibration performance. The risk prediction tool was validated in 3544 participants from the Australian Diabetes, Obesity and Lifestyle Study (AusDiab). Discrimination was acceptable, with an AUC of 0.78 (95% CI 0.75-0.81) in men and 0.78 (95% CI 0.74-0.81) in women. Calibration was poor (HL statistic: p < 0.001), but improved considerably after intercept recalibration. Examination of individual outcomes showed that in men, AUC was highest for CKD (0.85 [95% CI 0.78-0.91]) and lowest for T2D (0.69 [95% CI 0.65-0.74]). In women, AUC was highest for CVD (0.88 [95% CI 0.83-0.94)]) and lowest for T2D (0.71 [95% CI 0.66-0.75]). Validation of our previously developed tool showed robust discriminative performance across populations. Model recalibration is recommended to account for different disease rates. Our risk prediction tool can be useful in large-scale prevention programs for identifying those in need of further risk profiling because of their increased risk for chronic cardiometabolic diseases.
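Intercept recalibration of the kind applied here can be sketched as an intercept-only logistic fit with the original model's linear predictor entering as an offset. The data below are synthetic and the 0.6 shift is an assumption for illustration, not the study's code.

```python
# Sketch: recalibrating a risk model's intercept for a new population.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
lp = rng.normal(-2.0, 1.0, size=3544)               # original linear predictor
y = rng.binomial(1, 1 / (1 + np.exp(-(lp + 0.6))))  # new cohort, higher event rate

# Intercept-only logistic regression with lp as offset: logit(p) = a + lp
fit = sm.GLM(y, np.ones((len(y), 1)), family=sm.families.Binomial(), offset=lp).fit()
a = fit.params[0]
p_recal = 1 / (1 + np.exp(-(a + lp)))               # recalibrated risks
print(f"estimated intercept correction: {a:.2f}")
```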
Brand, Caroline; Lowe, Adrian; Hall, Stephen
2008-01-01
Background: Patients with rheumatoid arthritis have a higher risk of low bone mineral density than normal age-matched populations. There is limited evidence to support the cost effectiveness of population screening in rheumatoid arthritis, and case finding strategies have been proposed as a means to increase the cost effectiveness of diagnostic screening for osteoporosis. This study aimed to assess the performance attributes of generic and rheumatoid arthritis-specific clinical decision tools for diagnosing osteoporosis in a postmenopausal population with rheumatoid arthritis who attend ambulatory specialist rheumatology clinics. Methods: A cross-sectional study of 127 ambulatory post-menopausal women with rheumatoid arthritis was performed. Patients currently receiving or who had previously received bone active therapy were excluded. Eligible women underwent clinical assessment and dual-energy X-ray absorptiometry (DXA) bone mineral density assessment. Clinical decision tools, including those specific for rheumatoid arthritis, were compared to seven generic post-menopausal tools to predict osteoporosis (defined as T score < -2.5). Sensitivity, specificity, positive predictive and negative predictive values and area under the curve were assessed. The diagnostic attributes of the clinical decision tools were compared by examination of the area under the receiver operating characteristic (ROC) curve. Results: One hundred and twenty-seven women participated. The median age was 62 (IQR 56–71) years. Median disease duration was 108 (60–168) months. Seventy-two (57%) women had no record of a previous DXA examination. Eighty (63%) women had T scores at femoral neck or lumbar spine less than -1. The area under the ROC curve for clinical decision tool prediction of T score <-2.5 varied between 0.63 and 0.76. The rheumatoid arthritis-specific decision tools did not perform better than generic tools; however, the National Osteoporosis Foundation score could potentially reduce the number of unnecessary DXA tests by approximately 45% in this population. Conclusion: There was limited utility of clinical decision tools for predicting osteoporosis in this patient population. Fracture prediction tools that include risk factors independent of BMD are needed. PMID:18230132
Cost Minimization Using an Artificial Neural Network Sleep Apnea Prediction Tool for Sleep Studies
Teferra, Rahel A.; Grant, Brydon J. B.; Mindel, Jesse W.; Siddiqi, Tauseef A.; Iftikhar, Imran H.; Ajaz, Fatima; Aliling, Jose P.; Khan, Meena S.; Hoffmann, Stephen P.
2014-01-01
Rationale: More than a million polysomnograms (PSGs) are performed annually in the United States to diagnose obstructive sleep apnea (OSA). Third-party payers now advocate a home sleep test (HST), rather than an in-laboratory PSG, as the diagnostic study for OSA regardless of clinical probability, but the economic benefit of this approach is not known. Objectives: We determined the diagnostic performance of OSA prediction tools including the newly developed OSUNet, based on an artificial neural network, and performed a cost-minimization analysis when the prediction tools are used to identify patients who should undergo HST. Methods: The OSUNet was trained to predict the presence of OSA in a derivation group of patients who underwent an in-laboratory PSG (n = 383). Validation group 1 consisted of in-laboratory PSG patients (n = 149). The network was trained further in 33 patients who underwent HST and then was validated in a separate group of 100 HST patients (validation group 2). Likelihood ratios (LRs) were compared with two previously published prediction tools. The total costs from the use of the three prediction tools and the third-party approach within a clinical algorithm were compared. Measurements and Main Results: The OSUNet had a higher +LR in all groups compared with the STOP-BANG and the modified neck circumference (MNC) prediction tools. The +LRs for STOP-BANG, MNC, and OSUNet in validation group 1 were 1.1 (1.0–1.2), 1.3 (1.1–1.5), and 2.1 (1.4–3.1); and in validation group 2 they were 1.4 (1.1–1.7), 1.7 (1.3–2.2), and 3.4 (1.8–6.1), respectively. With an OSA prevalence less than 52%, the use of all three clinical prediction tools resulted in cost savings compared with the third-party approach. Conclusions: The routine requirement of an HST to diagnose OSA regardless of clinical probability is more costly compared with the use of OSA clinical prediction tools that identify patients who should undergo this procedure when OSA is expected to be present in less than half of the population. With OSA prevalence less than 40%, the OSUNet offers the greatest savings, which are substantial when the number of sleep studies done annually is considered. PMID:25068704
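The positive and negative likelihood ratios compared above follow directly from sensitivity and specificity; a small sketch with hypothetical values, not the OSUNet results:

```python
# Sketch: likelihood ratios from sensitivity and specificity.
def likelihood_ratios(sens, spec):
    return sens / (1 - spec), (1 - sens) / spec

lr_pos, lr_neg = likelihood_ratios(sens=0.85, spec=0.75)  # hypothetical values
print(f"+LR = {lr_pos:.1f}, -LR = {lr_neg:.2f}")
```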
Ernst, Corinna; Hahnen, Eric; Engel, Christoph; Nothnagel, Michael; Weber, Jonas; Schmutzler, Rita K; Hauke, Jan
2018-03-27
The use of next-generation sequencing approaches in clinical diagnostics has led to a tremendous increase in data and a vast number of variants of uncertain significance that require interpretation. Therefore, prediction of the effects of missense mutations using in silico tools has become a frequently used approach. Aim of this study was to assess the reliability of in silico prediction as a basis for clinical decision making in the context of hereditary breast and/or ovarian cancer. We tested the performance of four prediction tools (Align-GVGD, SIFT, PolyPhen-2, MutationTaster2) using a set of 236 BRCA1/2 missense variants that had previously been classified by expert committees. However, a major pitfall in the creation of a reliable evaluation set for our purpose is the generally accepted classification of BRCA1/2 missense variants using the multifactorial likelihood model, which is partially based on Align-GVGD results. To overcome this drawback we identified 161 variants whose classification is independent of any previous in silico prediction. In addition to the performance as stand-alone tools we examined the sensitivity, specificity, accuracy and Matthews correlation coefficient (MCC) of combined approaches. PolyPhen-2 achieved the lowest sensitivity (0.67), specificity (0.67), accuracy (0.67) and MCC (0.39). Align-GVGD achieved the highest values of specificity (0.92), accuracy (0.92) and MCC (0.73), but was outperformed regarding its sensitivity (0.90) by SIFT (1.00) and MutationTaster2 (1.00). All tools suffered from poor specificities, resulting in an unacceptable proportion of false positive results in a clinical setting. This shortcoming could not be bypassed by combination of these tools. In the best case scenario, 138 families would be affected by the misclassification of neutral variants within the cohort of patients of the German Consortium for Hereditary Breast and Ovarian Cancer. We show that due to low specificities state-of-the-art in silico prediction tools are not suitable to predict pathogenicity of variants of uncertain significance in BRCA1/2. Thus, clinical consequences should never be based solely on in silico forecasts. However, our data suggests that SIFT and MutationTaster2 could be suitable to predict benignity, as both tools did not result in false negative predictions in our analysis.
Trace Replay and Network Simulation Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
Acun, Bilge; Jain, Nikhil; Bhatele, Abhinav
2015-03-23
TraceR is a trace replay tool built upon the ROSS-based CODES simulation framework. TraceR can be used for predicting network performance and understanding network behavior by simulating messaging in High Performance Computing applications on interconnection networks.
WORMHOLE: Novel Least Diverged Ortholog Prediction through Machine Learning
Sutphin, George L.; Mahoney, J. Matthew; Sheppard, Keith; Walton, David O.; Korstanje, Ron
2016-01-01
The rapid advancement of technology in genomics and targeted genetic manipulation has made comparative biology an increasingly prominent strategy to model human disease processes. Predicting orthology relationships between species is a vital component of comparative biology. Dozens of strategies for predicting orthologs have been developed using combinations of gene and protein sequence, phylogenetic history, and functional interaction with progressively increasing accuracy. A relatively new class of orthology prediction strategies combines aspects of multiple methods into meta-tools, resulting in improved prediction performance. Here we present WORMHOLE, a novel ortholog prediction meta-tool that applies machine learning to integrate 17 distinct ortholog prediction algorithms to identify novel least diverged orthologs (LDOs) between 6 eukaryotic species—humans, mice, zebrafish, fruit flies, nematodes, and budding yeast. Machine learning allows WORMHOLE to intelligently incorporate predictions from a wide spectrum of strategies in order to form aggregate predictions of LDOs with high confidence. In this study we demonstrate the performance of WORMHOLE across each combination of query and target species. We show that WORMHOLE is particularly adept at improving LDO prediction performance between distantly related species, expanding the pool of LDOs while maintaining low evolutionary distance and a high level of functional relatedness between genes in LDO pairs. We present extensive validation, including cross-validated prediction of PANTHER LDOs and evaluation of evolutionary divergence and functional similarity, and discuss future applications of machine learning in ortholog prediction. A WORMHOLE web tool has been developed and is available at http://wormhole.jax.org/. PMID:27812085
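As a sketch of the stacking idea behind such meta-tools, the snippet below trains a plain logistic-regression combiner on the binary calls of many constituent predictors; the combiner and the data are stand-ins, not WORMHOLE's actual trained classifier or corpus:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training set: each row holds the binary calls of 17
# individual ortholog-prediction algorithms for one gene pair; y marks
# whether the pair is a curated least diverged ortholog.
X = rng.integers(0, 2, size=(500, 17))
y = (X.sum(axis=1) + rng.normal(0, 2, 500) > 9).astype(int)

# The stacker learns how much weight each constituent tool deserves.
meta = LogisticRegression(max_iter=1000).fit(X, y)
print("per-tool weights:", np.round(meta.coef_[0], 2))

# Aggregate confidence for a pair called by 12 of the 17 tools.
pair = np.r_[np.ones(12), np.zeros(5)].reshape(1, -1)
print("P(LDO):", meta.predict_proba(pair)[0, 1])
```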
NASA Technical Reports Server (NTRS)
Ling, Lisa
2014-01-01
For the purpose of performing safety analysis and risk assessment for a potential off-nominal atmospheric reentry resulting in vehicle breakup, a synthesis of trajectory propagation coupled with thermal analysis and the evaluation of node failure is required to predict the sequence of events, the timeline, and the progressive demise of spacecraft components. To provide this capability, the Simulation for Prediction of Entry Article Demise (SPEAD) analysis tool was developed. The software and methodology have been validated against actual flights, telemetry data, and validated software, and safety/risk analyses were performed for various programs using SPEAD. This report discusses the capabilities, modeling, validation, and application of the SPEAD analysis tool.
Palese, Alvisa; Marini, Eva; Guarnier, Annamaria; Barelli, Paolo; Zambiasi, Paola; Allegrini, Elisabetta; Bazoli, Letizia; Casson, Paola; Marin, Meri; Padovan, Marisa; Picogna, Michele; Taddia, Patrizia; Chiari, Paolo; Salmaso, Daniele; Marognolli, Oliva; Canzan, Federica; Ambrosi, Elisa; Saiani, Luisa; Grassetti, Luca
2016-10-01
There is growing interest in validating tools aimed at supporting the clinical decision-making process and research. However, an increased bureaucratization of clinical practice and redundancies in the measures collected have been reported by clinicians. Redundancies in clinical assessments affect both patients and nurses negatively. To validate a meta-tool measuring the risks/problems currently estimated by multiple tools used in daily practice, a secondary analysis of a database was performed, using cross-validation and longitudinal study designs. In total, 1464 patients admitted to 12 medical units in 2012 were assessed at admission with the Brass, Barthel, Conley and Braden tools. Pertinent outcomes, such as the occurrence of post-discharge need for resources and functional decline at discharge, as well as falls and pressure sores, were measured. Explorative factor analysis of each tool, inter-tool correlations and a conceptual evaluation of the redundant/similar items across tools were performed. The validation of the meta-tool was then performed through explorative factor analysis, confirmatory factor analysis and structural equation modeling to establish the ability of the meta-tool to predict the outcomes estimated by the original tools. High correlations between the tools emerged (r = 0.428 to 0.867), with a common variance from 18.3% to 75.1%. Through a conceptual evaluation and explorative factor analysis, the items were reduced from 42 to 20, and the three factors that emerged were confirmed by confirmatory factor analysis. According to the structural equation model results, two of the three factors predicted the outcomes. From the initial 42 items, the meta-tool is composed of 20 items capable of predicting the same outcomes as the original tools. © 2016 John Wiley & Sons, Ltd.
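The item-reduction step can be illustrated with an exploratory factor analysis on synthetic item scores; the latent structure, threshold and data below are invented for the sketch:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)

# Hypothetical stand-in for 42 admission items pooled from several
# tools: 3 latent factors generate correlated scores for 1464 patients.
latent = rng.normal(size=(1464, 3))
loadings = rng.normal(size=(3, 42))
items = latent @ loadings + rng.normal(scale=0.5, size=(1464, 42))

fa = FactorAnalysis(n_components=3).fit(items)

# Items that load weakly on every factor are candidates for removal,
# which is how a 42-item pool can shrink toward a 20-item meta-tool.
weak = np.where(np.abs(fa.components_).max(axis=0) < 0.8)[0]
print("candidate redundant items:", weak)
```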
Prognostic and Prediction Tools in Bladder Cancer: A Comprehensive Review of the Literature.
Kluth, Luis A; Black, Peter C; Bochner, Bernard H; Catto, James; Lerner, Seth P; Stenzl, Arnulf; Sylvester, Richard; Vickers, Andrew J; Xylinas, Evanguelos; Shariat, Shahrokh F
2015-08-01
This review focuses on risk assessment and prediction tools for bladder cancer (BCa). To review the current knowledge on risk assessment and prediction tools to enhance clinical decision making and counseling of patients with BCa. A literature search in English was performed using PubMed in July 2013. Relevant risk assessment and prediction tools for BCa were selected. More than 1600 publications were retrieved. Special attention was given to studies that investigated the clinical benefit of a prediction tool. Most prediction tools for BCa focus on the prediction of disease recurrence and progression in non-muscle-invasive bladder cancer or disease recurrence and survival after radical cystectomy. Although these tools are helpful, recent prediction tools aim to address a specific clinical problem, such as the prediction of organ-confined disease and lymph node metastasis to help identify patients who might benefit from neoadjuvant chemotherapy. Although a large number of prediction tools have been reported in recent years, many of them lack external validation. Few studies have investigated the clinical utility of any given model as measured by its ability to improve clinical decision making. There is a need for novel biomarkers to improve the accuracy and utility of prediction tools for BCa. Decision tools hold the promise of facilitating the shared decision process, potentially improving clinical outcomes for BCa patients. Prediction models need external validation and assessment of clinical utility before they can be incorporated into routine clinical care. We looked at models that aim to predict outcomes for patients with bladder cancer (BCa). We found a large number of prediction models that hold the promise of facilitating treatment decisions for patients with BCa. However, many models are missing confirmation in a different patient cohort, and only a few studies have tested the clinical utility of any given model as measured by its ability to improve clinical decision making. Copyright © 2015 European Association of Urology. Published by Elsevier B.V. All rights reserved.
Oulas, Anastasis; Karathanasis, Nestoras; Louloupi, Annita; Pavlopoulos, Georgios A; Poirazi, Panayiota; Kalantidis, Kriton; Iliopoulos, Ioannis
2015-01-01
Computational methods for miRNA target prediction are currently undergoing extensive review and evaluation. There is still a great need for improvement of these tools and bioinformatics approaches are looking towards high-throughput experiments in order to validate predictions. The combination of large-scale techniques with computational tools will not only provide greater credence to computational predictions but also lead to the better understanding of specific biological questions. Current miRNA target prediction tools utilize probabilistic learning algorithms, machine learning methods and even empirical biologically defined rules in order to build models based on experimentally verified miRNA targets. Large-scale protein downregulation assays and next-generation sequencing (NGS) are now being used to validate methodologies and compare the performance of existing tools. Tools that exhibit greater correlation between computational predictions and protein downregulation or RNA downregulation are considered the state of the art. Moreover, efficiency in prediction of miRNA targets that are concurrently verified experimentally provides additional validity to computational predictions and further highlights the competitive advantage of specific tools and their efficacy in extracting biologically significant results. In this review paper, we discuss the computational methods for miRNA target prediction and provide a detailed comparison of methodologies and features utilized by each specific tool. Moreover, we provide an overview of current state-of-the-art high-throughput methods used in miRNA target prediction.
Building a generalized distributed system model
NASA Technical Reports Server (NTRS)
Mukkamala, Ravi; Foudriat, E. C.
1991-01-01
A modeling tool for both analysis and design of distributed systems is discussed. Since many research institutions have access to networks of workstations, the researchers decided to build a tool running on top of the workstations to function as a prototype as well as a distributed simulator for a computing system. The effects of system modeling on performance prediction in distributed systems and the effect of static locking and deadlocks on the performance predictions of distributed transactions are also discussed. While the probability of deadlock is quite small, its effects on performance can be significant.
ERIC Educational Resources Information Center
Bekele, Rahel; McPherson, Maggie
2011-01-01
This research work presents a Bayesian Performance Prediction Model that was created in order to determine the strength of personality traits in predicting the level of mathematics performance of high school students in Addis Ababa. It is an automated tool that can be used to collect information from students for the purpose of effective group…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flach, G.P.; Burns, H.H.; Langton, C.
2013-07-01
The Cementitious Barriers Partnership (CBP) Project is a multi-disciplinary, multi-institutional collaboration supported by the U.S. Department of Energy (US DOE) Office of Tank Waste and Nuclear Materials Management. The CBP program has developed a set of integrated tools (based on state-of-the-art models and leaching test methods) that help improve understanding and predictions of the long-term structural, hydraulic and chemical performance of cementitious barriers used in nuclear applications. Tools selected for and developed under this program have been used to evaluate and predict the behavior of cementitious barriers used in near-surface engineered waste disposal systems for periods of performance up to 100 years and longer for operating facilities and longer than 1000 years for waste disposal. The CBP Software Toolbox has produced tangible benefits to the DOE Performance Assessment (PA) community. A review of prior DOE PAs has provided a list of potential opportunities for improving cementitious barrier performance predictions through the use of the CBP software tools. These opportunities include: 1) impact of atmospheric exposure to concrete and grout before closure, such as accelerated slag and Tc-99 oxidation, 2) prediction of changes in Kd/mobility as a function of time that result from changing pH and redox conditions, 3) concrete degradation from rebar corrosion due to carbonation, 4) early age cracking from drying and/or thermal shrinkage and 5) degradation due to sulfate attack. The CBP has already had the opportunity to provide near-term, tangible support to ongoing DOE-EM PAs such as the Savannah River Saltstone Disposal Facility (SDF) by providing a sulfate attack analysis that predicts the extent and damage that sulfate ingress will have on the concrete vaults over extended time (i.e., > 1000 years). This analysis is one of the many technical opportunities in cementitious barrier performance that can be addressed by the DOE-EM sponsored CBP software tools. Modification of the existing tools can provide many opportunities to bring defense in depth in prediction of the performance of cementitious barriers over time. (authors)
Fuzzy regression modeling for tool performance prediction and degradation detection.
Li, X; Er, M J; Lim, B S; Zhou, J H; Gan, O P; Rutkowski, L
2010-10-01
In this paper, the viability of using the Fuzzy-Rule-Based Regression Modeling (FRM) algorithm for tool performance prediction and degradation detection is investigated. The FRM is developed based on a multi-layered fuzzy-rule-based hybrid system with Multiple Regression Models (MRM) embedded into a fuzzy logic inference engine that employs Self Organizing Maps (SOM) for clustering. The FRM converts a complex nonlinear problem to a simplified linear format in order to further increase the accuracy in prediction and rate of convergence. The efficacy of the proposed FRM is tested through a case study, namely predicting the remaining useful life of a ball nose milling cutter during a dry machining process of hardened tool steel with a hardness of 52-54 HRc. A comparative study is further made between four predictive models using the same set of experimental data. It is shown that the FRM is superior to the conventional MRM, Back Propagation Neural Networks (BPNN) and Radial Basis Function Networks (RBFN) in terms of prediction accuracy and learning speed.
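A compact sketch of the cluster-then-regress structure described here, using k-means in place of the Self Organizing Map and plain linear models standing in for the embedded MRM (synthetic data, illustrative only):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)

# Hypothetical milling features (e.g. force, vibration) and a wear
# response that behaves differently in two operating regimes.
X = rng.uniform(0, 1, size=(300, 2))
y = np.where(X[:, 0] < 0.5, 2 * X[:, 1], 5 * X[:, 1] - 1)
y = y + rng.normal(0, 0.05, 300)

# Stage 1: partition the operating space (k-means as a simple
# stand-in for the SOM clustering used by the FRM approach).
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

# Stage 2: one local linear regression per cluster, mirroring the
# multiple regression models inside the fuzzy inference engine.
models = {c: LinearRegression().fit(X[km.labels_ == c], y[km.labels_ == c])
          for c in range(4)}

x_new = np.array([[0.8, 0.3]])
c = km.predict(x_new)[0]
print("regime:", c, "predicted wear:", models[c].predict(x_new)[0])
```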
In-silico wear prediction for knee replacements--methodology and corroboration.
Strickland, M A; Taylor, M
2009-07-22
The capability to predict in-vivo wear of knee replacements is a valuable pre-clinical analysis tool for implant designers. Traditionally, time-consuming experimental tests provided the principal means of investigating wear. Today, computational models offer an alternative. However, the validity of these models has not been demonstrated across a range of designs and test conditions, and several different formulas are in contention for estimating wear rates, limiting confidence in the predictive power of these in-silico models. This study collates and retrospectively simulates a wide range of experimental wear tests using fast rigid-body computational models with extant wear prediction algorithms, to assess the performance of current in-silico wear prediction tools. The number of tests corroborated gives a broader, more general assessment of the performance of these wear-prediction tools, and provides better estimates of the wear 'constants' used in computational models. High-speed rigid-body modelling allows a range of alternative algorithms to be evaluated. Whilst most cross-shear (CS)-based models perform comparably, the 'A/A+B' wear model appears to offer the best predictive power amongst existing wear algorithms. However, the range and variability of experimental data leaves considerable uncertainty in the results. More experimental data with reduced variability and more detailed reporting of studies will be necessary to corroborate these models with greater confidence. With simulation times reduced to only a few minutes, these models are ideally suited to large-volume 'design of experiment' or probabilistic studies (which are essential if pre-clinical assessment tools are to begin addressing the degree of variation observed clinically and in explanted components).
A numerical tool for reproducing driver behaviour: experiments and predictive simulations.
Casucci, M; Marchitto, M; Cacciabue, P C
2010-03-01
This paper presents the simulation tool called SDDRIVE (Simple Simulation of Driver performance), which is the numerical computerised implementation of the theoretical architecture describing Driver-Vehicle-Environment (DVE) interactions, contained in Cacciabue and Carsten [Cacciabue, P.C., Carsten, O. A simple model of driver behaviour to sustain design and safety assessment of automated systems in automotive environments, 2010]. Following a brief description of the basic algorithms that simulate the performance of drivers, the paper presents and discusses a set of experiments carried out in a Virtual Reality full scale simulator for validating the simulation. Then the predictive potentiality of the tool is shown by discussing two case studies of DVE interactions, performed in the presence of different driver attitudes in similar traffic conditions.
Driving and Low Vision: Validity of Assessments for Predicting Performance of Drivers
ERIC Educational Resources Information Center
Strong, J. Graham; Jutai, Jeffrey W.; Russell-Minda, Elizabeth; Evans, Mal
2008-01-01
The authors conducted a systematic review to examine whether vision-related assessments can predict the driving performance of individuals who have low vision. The results indicate that measures of visual field, contrast sensitivity, cognitive and attention-based tests, and driver screening tools have variable utility for predicting real-world…
Guidelines for reporting and using prediction tools for genetic variation analysis.
Vihinen, Mauno
2013-02-01
Computational prediction methods are widely used for the analysis of human genome sequence variants and their effects on gene/protein function, splice site aberration, pathogenicity, and disease risk. New methods are frequently developed. We believe that guidelines are essential for those writing articles about new prediction methods, as well as for those applying these tools in their research, so that the necessary details are reported. This will enable readers to gain the full picture of technical information, performance, and interpretation of results, and to facilitate comparisons of related methods. Here, we provide instructions on how to describe new methods, report datasets, and assess the performance of predictive tools. We also discuss what details of predictor implementation are essential for authors to understand. Similarly, these guidelines for the use of predictors provide instructions on what needs to be delineated in the text, as well as how researchers can avoid unwarranted conclusions. They are applicable to most prediction methods currently utilized. By applying these guidelines, authors will help reviewers, editors, and readers to more fully comprehend prediction methods and their use. © 2012 Wiley Periodicals, Inc.
Integrating Cache Performance Modeling and Tuning Support in Parallelization Tools
NASA Technical Reports Server (NTRS)
Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)
1998-01-01
With the resurgence of distributed shared memory (DSM) systems based on cache-coherent Non Uniform Memory Access (ccNUMA) architectures and the increasing disparity between memory and processor speeds, data locality overheads are becoming the greatest bottleneck in the way of realizing the potential high performance of these systems. While parallelization tools and compilers help users port their sequential applications to a DSM system, considerable time and effort are needed to tune the memory performance of these applications to achieve reasonable speedup. In this paper, we show that integrating cache performance modeling and tuning support within a parallelization environment can alleviate this problem. The Cache Performance Modeling and Prediction Tool (CPMP) employs trace-driven simulation techniques without the overhead of generating and managing detailed address traces. CPMP predicts the cache performance impact of source code level "what-if" modifications in a program to assist a user in the tuning process. CPMP is built on top of a customized version of the Computer Aided Parallelization Tools (CAPTools) environment. Finally, we demonstrate how CPMP can be applied to tune a real Computational Fluid Dynamics (CFD) application.
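The essence of trace-driven cache simulation fits in a few lines; this direct-mapped toy is only a sketch of the technique, not CPMP itself:

```python
def simulate_cache(trace, num_sets=256, line_bytes=64):
    """Minimal trace-driven simulation of a direct-mapped cache:
    replay a sequence of byte addresses and count hits and misses."""
    tags = [None] * num_sets
    hits = misses = 0
    for addr in trace:
        line = addr // line_bytes
        index, tag = line % num_sets, line // num_sets
        if tags[index] == tag:
            hits += 1
        else:
            misses += 1
            tags[index] = tag
    return hits, misses

# A strided loop, the kind of access pattern a source-level "what-if"
# change would alter: stride 64 touches a new cache line every access.
trace = [i * 64 for i in range(10000)]
print(simulate_cache(trace))
```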
Aerodynamics and thermal physics of helicopter ice accretion
NASA Astrophysics Data System (ADS)
Han, Yiqiang
Ice accretion on aircraft introduces significant loss in airfoil performance. Reduced lift-to-drag ratio reduces the vehicle capability to maintain altitude and also limits its maneuverability. Current ice accretion performance degradation modeling approaches are calibrated only to a limited envelope of liquid water content, impact velocity, temperature, and water droplet size; consequently, inaccurate aerodynamic performance degradation estimates result. The reduced ice accretion prediction capabilities in the glaze ice regime are primarily due to a lack of knowledge of surface roughness induced by ice accretion. A comprehensive understanding of the ice roughness effects on airfoil heat transfer, ice accretion shapes, and ultimately aerodynamic performance is critical for the design of ice protection systems. Surface roughness effects on both heat transfer and aerodynamic performance degradation on airfoils have been experimentally evaluated. Novel techniques, such as ice molding and casting methods and transient heat transfer measurement using non-intrusive thermal imaging methods, were developed at the Adverse Environment Rotor Test Stand (AERTS) facility at Penn State. A novel heat transfer scaling method specifically for the turbulent flow regime was also conceived. A heat transfer scaling parameter, labeled the Coefficient of Stanton and Reynolds Number (CSR = St_x/Re_x^-0.2), has been validated against reference data found in the literature for rough flat plates with Reynolds number (Re) up to 1x10^7, for rough cylinders with Re ranging from 3x10^4 to 4x10^6, and for turbine blades with Re from 7.5x10^5 to 7x10^6. This is the first time that the effect of Reynolds number is shown to be successfully eliminated on heat transfer magnitudes measured on rough surfaces. Analytical models for ice roughness distribution, heat transfer prediction, and aerodynamic performance degradation due to ice accretion have also been developed. The ice roughness prediction model was developed based on a set of 82 experimental measurements and also compared to existing prediction tools. Two reference predictions found in the literature yielded 76% and 54% discrepancy with respect to experimental testing, whereas the proposed ice roughness prediction model resulted in a minimum prediction accuracy of 31%. It must be noted that the accuracy of the proposed model is within the ice shape reproduction uncertainty of icing facilities. Based on the new ice roughness prediction model and the CSR heat transfer scaling method, an icing heat transfer model was developed. The approach achieved high accuracy in heat transfer prediction compared to experiments conducted at the AERTS facility. The discrepancy between predictions and experimental results was within +/-15%, which was within the measurement uncertainty range of the facility. By combining both the ice roughness and heat transfer predictions, and incorporating the modules into an existing ice prediction tool (LEWICE), improved prediction capability was obtained, especially for the glaze regime. With the available ice shapes accreted at the AERTS facility and additional experiments found in the literature, 490 sets of experimental ice shapes and corresponding aerodynamics testing data were available. A physics-based performance degradation empirical tool was developed and achieved a mean absolute deviation of 33% when compared to the entire experimental dataset, whereas 60% to 243% discrepancies were observed using legacy drag penalty prediction tools.
Rotor torque predictions coupling Blade Element Momentum Theory and the proposed drag performance degradation tool were conducted on a total of 17 validation cases. The coupled prediction tool achieved a 10% prediction error for clean rotor conditions and a 16% error for iced rotor conditions. It was shown that additional roughness elements could affect the measured drag by up to 25% during experimental testing, emphasizing the need for realistic ice structures in aerodynamic modeling and testing for ice accretion.
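The CSR scaling parameter itself is a one-liner; since Re_x^-0.2 sits in the denominator, CSR = St_x * Re_x^0.2. The values below are illustrative:

```python
def csr(stanton, reynolds):
    """Coefficient of Stanton and Reynolds Number for turbulent flow
    over a rough surface: CSR = St_x / Re_x**(-0.2) = St_x * Re_x**0.2."""
    return stanton * reynolds ** 0.2

# Illustrative values only: if the scaling holds, CSR collapses onto a
# single curve for Re from ~3e4 (cylinders) up to ~1e7 (flat plates).
for re in (3e4, 1e6, 1e7):
    print(f"Re = {re:.0e}  ->  CSR = {csr(0.002, re):.4f}")
```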
Mysara, Mohamed; Elhefnawi, Mahmoud; Garibaldi, Jonathan M
2012-06-01
The investigation of small interfering RNA (siRNA) and its posttranscriptional gene regulation has become an extremely important research topic, both for fundamental reasons and for potential longer-term therapeutic benefits. Several factors affect the functionality of siRNA, including positional preferences, target accessibility and other thermodynamic features. State-of-the-art tools aim to optimize the selection of target siRNAs by identifying those that may have high experimental inhibition. Such tools implement artificial neural network models, such as Biopredsi and ThermoComposition21, and linear regression models, such as DSIR, i-Score and Scales, among others. However, all these models have limitations in performance. In this work, a new neural-network-trained siRNA scoring/efficacy prediction model was developed by combining two existing scoring algorithms (ThermoComposition21 and i-Score), together with the whole stacking energy (ΔG), in a multi-layer artificial neural network. These three parameters were chosen after a comparative combinatorial study between five well-known tools. Our model, 'MysiRNA', was trained on 2431 siRNA records and tested using three further datasets. MysiRNA was compared with 11 alternative scoring tools in an evaluation study assessing predicted against experimental siRNA efficiency, where it achieved the highest performance both in terms of correlation coefficient (R^2 = 0.600) and receiver operating characteristic analysis (AUC = 0.808), improving prediction accuracy by up to 18% with respect to the sensitivity and specificity of the best available tools. MysiRNA is a novel, freely accessible model capable of predicting siRNA inhibition efficiency with improved specificity and sensitivity. This multiclassifier approach could help improve the performance of prediction in several bioinformatics areas. The MysiRNA model, part of the MysiRNA-Designer package [1], is expected to play a key role in siRNA selection and evaluation. Copyright © 2012 Elsevier Inc. All rights reserved.
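As a sketch of the combiner architecture, a small feed-forward network over the two scores and ΔG; the training data here are synthetic, not the MysiRNA corpus:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

# Hypothetical records: ThermoComposition21 score, i-Score and whole
# stacking energy (dG, rescaled) per siRNA, with measured inhibition
# as the target. All values are synthetic, for illustration only.
X = rng.uniform(0, 1, size=(400, 3))
inhibition = (0.5 * X[:, 0] + 0.3 * X[:, 1] - 0.2 * X[:, 2]
              + rng.normal(0, 0.05, 400))

# A small multi-layer network in the spirit of the MysiRNA combiner.
net = MLPRegressor(hidden_layer_sizes=(8, 4), max_iter=5000,
                   random_state=0).fit(X, inhibition)
print("predicted efficacy:", net.predict([[0.9, 0.8, 0.2]])[0])
```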
Validation of Predictors of Fall Events in Hospitalized Patients With Cancer.
Weed-Pfaff, Samantha H; Nutter, Benjamin; Bena, James F; Forney, Jennifer; Field, Rosemary; Szoka, Lynn; Karius, Diana; Akins, Patti; Colvin, Christina M; Albert, Nancy M
2016-10-01
A seven-item cancer-specific fall risk tool (Cleveland Clinic Capone-Albert [CC-CA] Fall Risk Score) was shown to have a strong concordance index for predicting falls; however, validation of the model is needed. The aims of this study were to validate that the CC-CA Fall Risk Score, made up of six factors, predicts falls in patients with cancer and to determine if the CC-CA Fall Risk Score performs better than the Morse Fall Tool. Using a prospective, comparative methodology, data were collected from electronic health records of patients hospitalized for cancer care in four hospitals. Risk factors from each tool were recorded, when applicable. Multivariable models were created to predict the probability of a fall. A concordance index for each fall tool was calculated. The CC-CA Fall Risk Score provided higher discrimination than the Morse Fall Tool in predicting fall events in patients hospitalized for cancer management.
Revel8or: Model Driven Capacity Planning Tool Suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Liming; Liu, Yan; Bui, Ngoc B.
2007-05-31
Designing complex multi-tier applications that must meet strict performance requirements is a challenging software engineering problem. Ideally, the application architect could derive accurate performance predictions early in the project life-cycle, leveraging initial application design-level models and a description of the target software and hardware platforms. To this end, we have developed a capacity planning tool suite for component-based applications, called Revel8or. The tool adheres to the model driven development paradigm and supports benchmarking and performance prediction for J2EE, .Net and Web services platforms. The suite is composed of three different tools: MDAPerf, MDABench and DSLBench. MDAPerf allows annotation of design diagrams and derives performance analysis models. MDABench allows a customized benchmark application to be modeled in the UML 2.0 Testing Profile and automatically generates a deployable application, with measurement automatically conducted. DSLBench allows the same benchmark modeling and generation to be conducted using a simple performance engineering Domain Specific Language (DSL) in Microsoft Visual Studio. DSLBench integrates with Visual Studio and reuses its load testing infrastructure. Together, the tool suite can assist capacity planning across platforms in an automated fashion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Callaghan, Michael E., E-mail: elspeth.raymond@health.sa.gov.au; Freemasons Foundation Centre for Men's Health, University of Adelaide; Urology Unit, Repatriation General Hospital, SA Health, Flinders Centre for Innovation in Cancer
Purpose: To identify, through a systematic review, all validated tools used for the prediction of patient-reported outcome measures (PROMs) in patients being treated with radiation therapy for prostate cancer, and provide a comparative summary of accuracy and generalizability. Methods and Materials: PubMed and EMBASE were searched from July 2007. Title/abstract screening, full text review, and critical appraisal were undertaken by 2 reviewers, whereas data extraction was performed by a single reviewer. Eligible articles had to provide a summary measure of accuracy and undertake internal or external validation. Tools were recommended for clinical implementation if they had been externally validated and found to have accuracy ≥70%. Results: The search strategy identified 3839 potential studies, of which 236 progressed to full text review and 22 were included. From these studies, 50 tools predicted gastrointestinal/rectal symptoms, 29 tools predicted genitourinary symptoms, 4 tools predicted erectile dysfunction, and no tools predicted quality of life. For patients treated with external beam radiation therapy, 3 tools could be recommended for the prediction of rectal toxicity, gastrointestinal toxicity, and erectile dysfunction. For patients treated with brachytherapy, 2 tools could be recommended for the prediction of urinary retention and erectile dysfunction. Conclusions: A large number of tools for the prediction of PROMs in prostate cancer patients treated with radiation therapy have been developed. Only a small minority are accurate and have been shown to be generalizable through external validation. This review provides an accessible catalogue of tools that are ready for clinical implementation as well as which should be prioritized for validation.
Overview: What's Worked and What Hasn't as a Guide towards Predictive Admissions Tool Development
ERIC Educational Resources Information Center
Siu, Eric; Reiter, Harold I.
2009-01-01
Admissions committees and researchers around the globe have used diligence and imagination to develop and implement various screening measures with the ultimate goal of predicting future clinical and professional performance. What works for predicting future job performance in the human resources world and in most of the academic world may not,…
Huysentruyt, Koen; Devreker, Thierry; Dejonckheere, Joachim; De Schepper, Jean; Vandenplas, Yvan; Cools, Filip
2015-08-01
The aim of the present study was to evaluate the predictive accuracy of screening tools for assessing nutritional risk in hospitalized children in developed countries. The study involved a systematic review of literature (MEDLINE, EMBASE, and Cochrane Central databases up to January 17, 2014) of studies on the diagnostic performance of pediatric nutritional screening tools. Methodological quality was assessed using a modified QUADAS tool. Sensitivity and specificity were calculated for each screening tool per validation method. A meta-analysis was performed to estimate the risk ratio of different screening result categories of being truly at nutritional risk. A total of 11 studies were included on ≥1 of the following screening tools: Pediatric Nutritional Risk Score, Screening Tool for the Assessment of Malnutrition in Paediatrics, Paediatric Yorkhill Malnutrition Score, and Screening Tool for Risk on Nutritional Status and Growth. Because of variation in reference standards, a direct comparison of the predictive accuracy of the screening tools was not possible. A meta-analysis was performed on 1629 children from 7 different studies. The risk ratio of being truly at nutritional risk was 0.349 (95% confidence interval [CI] 0.16-0.78) for children in the low versus moderate screening category and 0.292 (95% CI 0.19-0.44) in the moderate versus high screening category. There is insufficient evidence to choose 1 nutritional screening tool over another based on their predictive accuracy. The estimated risk of being at "true nutritional risk" increases with each category of screening test result. Each screening category should be linked to a specific course of action, although further research is needed.
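As a sketch of the meta-analytic arithmetic, the following computes a risk ratio and its 95% CI from two screening categories; the counts are invented for illustration:

```python
import math

def risk_ratio(a, n1, b, n2):
    """Risk ratio of event rates a/n1 vs b/n2 with a 95% CI
    computed on the log scale (standard Katz method)."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)
    lo, hi = (math.exp(math.log(rr) + s * 1.96 * se) for s in (-1, 1))
    return rr, lo, hi

# Illustrative counts: 12/300 children in the low screening category
# vs 45/400 in the moderate category are truly at nutritional risk.
print(risk_ratio(12, 300, 45, 400))
```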
NASA Astrophysics Data System (ADS)
Manzoor Hussain, M.; Pitchi Raju, V.; Kandasamy, J.; Govardhan, D.
2018-04-01
Friction surfacing is a well-established solid-state technology used for depositing abrasion- and corrosion-protection coatings on rigid materials. This novel process has a wide range of industrial applications, particularly in the field of reclamation and repair of damaged and worn engineering components. In this paper, we present an ANN-based prediction of the tensile and shear strength of friction-surfaced tool steel, using simulated results of the friction surfacing process. The experiments produced tool steel coatings on low-carbon-steel parts by varying the main process parameters, namely friction pressure, rotational speed and welding speed. The simulation was performed with a 3^3 factorial design that spans the maximum and minimum limits of the experimental work, which followed a 2^3 factorial design. Neural network structures, such as the Feed Forward Neural Network (FFNN), were used to predict the tensile and shear strength of tool steel deposits produced by friction surfacing.
Modeling and performance assessment in QinetiQ of EO and IR airborne reconnaissance systems
NASA Astrophysics Data System (ADS)
Williams, John W.; Potter, Gary E.
2002-11-01
QinetiQ are the technical authority responsible for specifying the performance requirements for the procurement of airborne reconnaissance systems, on behalf of the UK MoD. They are also responsible for acceptance of delivered systems, overseeing and verifying the installed system performance as predicted and then assessed by the contractor. Measures of functional capability are central to these activities. The conduct of these activities utilises the broad technical insight and wide range of analysis tools and models available within QinetiQ. This paper focuses on the tools, methods and models that are applicable to systems based on EO and IR sensors. The tools, methods and models are described, and representative output for systems that QinetiQ has been responsible for is presented. The principal capability applicable to EO and IR airborne reconnaissance systems is the STAR (Simulation Tools for Airborne Reconnaissance) suite of models. STAR generates predictions of performance measures such as GRD (Ground Resolved Distance) and GIQE (General Image Quality Equation) NIIRS (National Imagery Interpretation Rating Scales). It also generates images representing sensor output, using the scene generation software CAMEO-SIM and the imaging sensor model EMERALD. The simulated image 'quality' is fully correlated with the predicted non-imaging performance measures. STAR also generates image and table data that is compliant with STANAG 7023, which may be used to test ground station functionality.
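For reference, the GIQE version 4 regression can be evaluated directly; the coefficients below are the commonly published open-literature values, and the sensor figures are purely illustrative:

```python
import math

def giqe4_niirs(gsd_inches, rer, h, g, snr):
    """GIQE version 4 estimate of NIIRS, as commonly published: GSD in
    inches, RER and H (edge overshoot) are geometric means, G is the
    post-processing noise gain, SNR the signal-to-noise ratio."""
    a, b = (3.32, 1.559) if rer >= 0.9 else (3.16, 2.817)
    return (10.251 - a * math.log10(gsd_inches)
            + b * math.log10(rer) - 0.656 * h - 0.344 * g / snr)

# Illustrative sensor: 12-inch GSD, RER 0.92, modest edge overshoot.
print(round(giqe4_niirs(gsd_inches=12, rer=0.92, h=1.1, g=1.0, snr=50), 2))
```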
ClubSub-P: Cluster-Based Subcellular Localization Prediction for Gram-Negative Bacteria and Archaea
Paramasivam, Nagarajan; Linke, Dirk
2011-01-01
The subcellular localization (SCL) of proteins provides important clues to their function in a cell. In our efforts to predict useful vaccine targets against Gram-negative bacteria, we noticed that misannotated start codons frequently lead to wrongly assigned SCLs. This and other problems in SCL prediction, such as the relatively high false-positive and false-negative rates of some tools, can be avoided by applying multiple prediction tools to groups of homologous proteins. Here we present ClubSub-P, an online database that combines existing SCL prediction tools into a consensus pipeline from more than 600 proteomes of fully sequenced microorganisms. On top of the consensus prediction at the level of single sequences, the tool uses clusters of homologous proteins from Gram-negative bacteria and from Archaea to eliminate false-positive and false-negative predictions. ClubSub-P can assign the SCL of proteins from Gram-negative bacteria and Archaea with high precision. The database is searchable, and can easily be expanded using either new bacterial genomes or new prediction tools as they become available. This will further improve the performance of the SCL prediction, as well as the detection of misannotated start codons and other annotation errors. ClubSub-P is available online at http://toolkit.tuebingen.mpg.de/clubsubp/ PMID:22073040
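A hedged sketch of cluster-level consensus: majority voting over the calls that several SCL predictors make across a cluster of homologs. The tool names here are merely illustrative, not necessarily those combined by ClubSub-P:

```python
from collections import Counter

def consensus_scl(cluster_predictions):
    """Majority-vote consensus over the predictions that several SCL
    tools made for each member of a cluster of homologous proteins."""
    votes = Counter(p for protein in cluster_predictions
                    for p in protein.values())
    label, count = votes.most_common(1)[0]
    return label, count / sum(votes.values())

# Hypothetical cluster of three homologs scored by three tools each;
# the odd 'Cytoplasmic' call (e.g. from a misannotated start codon)
# is outvoted at the cluster level.
cluster = [
    {"toolA": "OuterMembrane", "toolB": "OuterMembrane", "toolC": "OuterMembrane"},
    {"toolA": "OuterMembrane", "toolB": "Cytoplasmic",   "toolC": "OuterMembrane"},
    {"toolA": "OuterMembrane", "toolB": "OuterMembrane", "toolC": "OuterMembrane"},
]
print(consensus_scl(cluster))  # -> ('OuterMembrane', 0.888...)
```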
Real-time prediction of the occurrence of GLE events
NASA Astrophysics Data System (ADS)
Núñez, Marlon; Reyes-Santiago, Pedro J.; Malandraki, Olga E.
2017-07-01
A tool for predicting the occurrence of Ground Level Enhancement (GLE) events using the UMASEP scheme is presented. This real-time tool, called HESPERIA UMASEP-500, is based on the detection of the magnetic connection, along which protons arrive in the near-Earth environment, by estimating the lag correlation between the time derivatives of 1 min soft X-ray flux (SXR) and 1 min near-Earth proton fluxes observed by the GOES satellites. Unlike current GLE warning systems, this tool can predict GLE events before the detection by any neutron monitor (NM) station. The prediction performance measured for the period from 1986 to 2016 is presented for two consecutive periods, because of their notable difference in performance. For the 2000-2016 period, this prediction tool obtained a probability of detection (POD) of 53.8% (7 of 13 GLE events), a false alarm ratio (FAR) of 30.0%, and average warning times (AWT) of 8 min with respect to the first NM station's alert and 15 min to the GLE Alert Plus's warning. We have tested the model by replacing the GOES proton data with SOHO/EPHIN proton data, and the results are similar in terms of POD, FAR, and AWT for the same period. The paper also presents a comparison with a GLE warning system.
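A minimal sketch of the lag-correlation idea at the core of the scheme, run on synthetic 1 min flux series (not the operational UMASEP code):

```python
import numpy as np

def best_lag_correlation(sxr, protons, max_lag=30):
    """Correlate the time derivatives of soft X-ray and near-Earth proton
    fluxes over a range of lags (in minutes) and return the strongest."""
    dx, dp = np.diff(sxr), np.diff(protons)
    lags = range(1, max_lag + 1)
    corrs = [np.corrcoef(dx[:-lag], dp[lag:])[0, 1] for lag in lags]
    best = int(np.argmax(corrs))
    return lags[best], corrs[best]

# Synthetic fluxes: the proton channel echoes the X-ray rise 12 min later.
t = np.arange(300.0)
sxr = np.exp(-((t - 100.0) / 20.0) ** 2)
protons = np.roll(sxr, 12) + np.random.default_rng(4).normal(0, 0.01, 300)
print(best_lag_correlation(sxr, protons))  # lag close to 12 minutes
```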
Fazel, Seena; Singh, Jay P; Doll, Helen; Grann, Martin
2012-07-24
To investigate the predictive validity of tools commonly used to assess the risk of violence, sexual, and criminal behaviour. Systematic review and tabular meta-analysis of replication studies following PRISMA guidelines. PsycINFO, Embase, Medline, and United States Criminal Justice Reference Service Abstracts. We included replication studies from 1 January 1995 to 1 January 2011 if they provided contingency data for the offending outcome that the tools were designed to predict. We calculated the diagnostic odds ratio, sensitivity, specificity, area under the curve, positive predictive value, negative predictive value, the number needed to detain to prevent one offence, as well as a novel performance indicator: the number safely discharged. We investigated potential sources of heterogeneity using metaregression and subgroup analyses. Risk assessments were conducted on 73 samples comprising 24,847 participants from 13 countries, of whom 5879 (23.7%) offended over an average of 49.6 months. When used to predict violent offending, risk assessment tools produced low to moderate positive predictive values (median 41%, interquartile range 27-60%) and higher negative predictive values (91%, 81-95%), and a corresponding median number needed to detain of 2 (2-4) and number safely discharged of 10 (4-18). Instruments designed to predict violent offending performed better than those aimed at predicting sexual or general crime. Although risk assessment tools are widely used in clinical and criminal justice settings, their predictive accuracy varies depending on how they are used. They seem to identify low risk individuals with high levels of accuracy, but their use as sole determinants of detention, sentencing, and release is not supported by the current evidence. Further research is needed to examine their contribution to treatment and management.
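To make the reported measures concrete, this sketch derives them from a single hypothetical study's 2x2 table; "number safely discharged" is computed here as true negatives per false negative, one plausible reading of the indicator:

```python
def detention_metrics(tp, fp, fn, tn):
    """Summary measures of a risk tool from one study's 2x2 table of
    predicted-high-risk vs observed offending."""
    dor = (tp * tn) / (fp * fn)    # diagnostic odds ratio
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    nnd = (tp + fp) / tp           # number needed to detain
    nsd = tn / fn                  # number safely discharged per one
                                   # offender wrongly released (assumed)
    return dor, ppv, npv, nnd, nsd

# Illustrative counts chosen to land near the reported medians
# (PPV ~ 41%, NPV ~ 91%, NND ~ 2, NSD ~ 10).
print(detention_metrics(tp=49, fp=71, fn=20, tn=210))
```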
Decision-making tools in prostate cancer: from risk grouping to nomograms.
Fontanella, Paolo; Benecchi, Luigi; Grasso, Angelica; Patel, Vipul; Albala, David; Abbou, Claude; Porpiglia, Francesco; Sandri, Marco; Rocco, Bernardo; Bianchi, Giampaolo
2017-12-01
Prostate cancer (PCa) is the most common solid neoplasm and the second leading cause of cancer death in men. After the Partin tables were developed, a number of predictive and prognostic tools became available for risk stratification. These tools have allowed the urologist to better characterize this disease and lead to more confident treatment decisions for patients. The purpose of this study is to critically review the decision-making tools currently available to the urologist, from the moment when PCa is first diagnosed until patients experience metastatic progression and death. A systematic and critical analysis through Medline, EMBASE, Scopus and Web of Science databases was carried out in February 2016 as per the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. The search was conducted using the following key words: "prostate cancer," "prediction tools," "nomograms." Seventy-two studies were identified in the literature search. We summarized the results into six sections: Tools for prediction of life expectancy (before treatment), Tools for prediction of pathological stage (before treatment), Tools for prediction of survival and cancer-specific mortality (before/after treatment), Tools for prediction of biochemical recurrence (before/after treatment), Tools for prediction of metastatic progression (after treatment) and in the last section biomarkers and genomics. The management of PCa patients requires a tailored approach to deliver a truly personalized treatment. The currently available tools are of great help in helping the urologist in the decision-making process. These tests perform very well in high-grade and low-grade disease, while for intermediate-grade disease further research is needed. Newly discovered markers, genomic tests, and advances in imaging acquisition through mpMRI will help in instilling confidence that the appropriate treatments are being offered to patients with prostate cancer.
Konc, Janez; Janežič, Dušanka
2017-09-01
ProBiS (Protein Binding Sites) Tools consist of algorithm, database, and web servers for prediction of binding sites and protein ligands based on the detection of structurally similar binding sites in the Protein Data Bank. In this article, we review the operations that ProBiS Tools perform, provide comments on the evolution of the tools, and give some implementation details. We review some of its applications to biologically interesting proteins. ProBiS Tools are freely available at http://probis.cmm.ki.si and http://probis.nih.gov. Copyright © 2017 Elsevier Ltd. All rights reserved.
Updating Risk Prediction Tools: A Case Study in Prostate Cancer
Ankerst, Donna P.; Koniarski, Tim; Liang, Yuanyuan; Leach, Robin J.; Feng, Ziding; Sanda, Martin G.; Partin, Alan W.; Chan, Daniel W; Kagan, Jacob; Sokoll, Lori; Wei, John T; Thompson, Ian M.
2013-01-01
Online risk prediction tools for common cancers are now easily accessible and widely used by patients and doctors for informed decision-making concerning screening and diagnosis. A practical problem is that, as cancer research moves forward and new biomarkers and risk factors are discovered, there is a need to update the risk algorithms to include them. Typically the new markers and risk factors cannot be retrospectively measured on the same study participants used to develop the original prediction tool, necessitating the merging of a separate study of different participants, which may be much smaller in sample size and of a different design. Validation of the updated tool on a third independent data set is warranted before the updated tool can go online. This article reports on the application of Bayes rule for updating risk prediction tools to include a set of biomarkers measured in an external study to the original study used to develop the risk prediction tool. The procedure is illustrated in the context of updating the online Prostate Cancer Prevention Trial Risk Calculator to incorporate the new markers %freePSA and [−2]proPSA measured on an external case-control study performed in Texas, U.S. Recent state-of-the-art methods in validation of risk prediction tools and evaluation of the improvement of updated to original tools are implemented using an external validation set provided by the U.S. Early Detection Research Network. PMID:22095849
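The core of a Bayes-rule update can be written in a few lines; the likelihood ratio below is a hypothetical value standing in for what an external marker study would supply:

```python
def update_risk(prior_risk, lr_new_markers):
    """Bayes-rule update of an existing risk-tool output: convert the
    prior risk to odds, multiply by the likelihood ratio carried by
    the new markers, and convert back to a probability."""
    prior_odds = prior_risk / (1 - prior_risk)
    post_odds = prior_odds * lr_new_markers
    return post_odds / (1 + post_odds)

# Illustrative update: the original calculator gives a 20% cancer risk;
# the new marker panel contributes a likelihood ratio of 2.5
# (a hypothetical value, not an estimate from the actual study).
print(update_risk(0.20, 2.5))  # -> 0.385
```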
Mbeutcha, Aurélie; Mathieu, Romain; Rouprêt, Morgan; Gust, Kilian M; Briganti, Alberto; Karakiewicz, Pierre I; Shariat, Shahrokh F
2016-10-01
In the context of customized patient care for upper tract urothelial carcinoma (UTUC), decision-making could be facilitated by risk assessment and prediction tools. The aim of this study was to provide a critical overview of existing predictive models and to review emerging promising prognostic factors for UTUC. A literature search of articles published in English from January 2000 to June 2016 was performed using PubMed. Studies on risk group stratification models and predictive tools in UTUC were selected, together with studies on predictive factors and biomarkers associated with advanced-stage UTUC and oncological outcomes after surgery. Various predictive tools have been described for advanced-stage UTUC assessment, disease recurrence and cancer-specific survival (CSS). Most of these models are based on well-established prognostic factors such as tumor stage, grade and lymph node (LN) metastasis, but some also integrate newly described prognostic factors and biomarkers. These new prediction tools seem to reach a high level of accuracy, but they lack external validation and decision-making analysis. The combinations of patient-, pathology- and surgery-related factors together with novel biomarkers have led to promising predictive tools for oncological outcomes in UTUC. However, external validation of these predictive models is a prerequisite before their introduction into daily practice. New models predicting response to therapy are urgently needed to allow accurate and safe individualized management in this heterogeneous disease.
Computer assisted blast design and assessment tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cameron, A.R.; Kleine, T.H.; Forsyth, W.W.
1995-12-31
In general the software required by a blast designer includes tools that graphically present blast designs (surface and underground), can analyze a design or predict its result, and can assess blasting results. As computers develop and computer literacy continues to rise, the development and use of such tools will spread. An example of the tools that are becoming available includes: automatic blast pattern generation and underground ring design; blast design evaluation in terms of explosive distribution and detonation simulation; fragmentation prediction; blast vibration prediction and minimization; blast monitoring for assessment of dynamic performance; vibration measurement, display and signal processing; evaluation of blast results in terms of fragmentation; and risk and reliability based blast assessment. The authors have identified a set of criteria that are essential in choosing appropriate software blasting tools.
ERIC Educational Resources Information Center
Jamil, Faiza M.; Sabol, Terri J.; Hamre, Bridget K.; Pianta, Robert C.
2015-01-01
Contemporary education reforms focus on assessing teachers' performance and developing selection mechanisms for hiring effective teachers. Tools that enable the prediction of teachers' classroom performance promote schools' ability to hire teachers more likely to be successful in the classroom. In addition, these assessment tools can be used for…
Deriving the polarization behavior of many-layer mirror coatings
NASA Astrophysics Data System (ADS)
White, Amanda J.; Harrington, David M.; Sueoka, Stacey R.
2018-06-01
End-to-end models of astronomical instrument performance are becoming commonplace to demonstrate feasibility and guarantee performance at large observatories. Astronomical techniques like adaptive optics and high contrast imaging have made great strides towards making detailed performance predictions; however, for polarimetric techniques, fundamental tools for predicting performance do not exist. One big missing piece is predicting the wavelength and field-of-view dependence of a many-mirror articulated optical system, particularly with complex protected metal coatings. Predicting the polarization performance of instruments requires combining metrology of mirror coatings, tools to create mirror coating models, and optical modeling software for polarized beam propagation. The inability to predict instrument-induced polarization or to define polarization performance expectations has far-reaching implications for upcoming major observatories, such as the Daniel K. Inouye Solar Telescope (DKIST), that aim to take polarization measurements at unprecedented sensitivity and resolution. Here we present a method for modelling the wavelength-dependent refractive index of an optic using Berreman calculus, a mathematical formalism that describes how an electromagnetic field propagates through a birefringent medium. With Berreman calculus, we can better predict the Mueller matrix, diattenuation, and retardance of arbitrary thicknesses of amorphous many-layer coatings, as well as stacks of birefringent crystals, from laboratory measurements. This allows the wavelength-dependent refractive index to be accurately determined and the polarization behavior to be derived for a given optic.
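As a toy building block of such predictions, the Fresnel equations for a single air-metal interface already yield diattenuation and retardance; Berreman calculus generalizes this to birefringent multilayers. The aluminum index and fold geometry below are illustrative textbook values:

```python
import cmath, math

def mirror_polarization(n_metal, theta_deg, n_inc=1.0):
    """Fresnel amplitude coefficients for a single air-metal interface,
    plus the resulting diattenuation and retardance (degrees)."""
    ti = math.radians(theta_deg)
    tt = cmath.asin(n_inc * math.sin(ti) / n_metal)   # complex Snell law
    rs = (n_inc * math.cos(ti) - n_metal * cmath.cos(tt)) / \
         (n_inc * math.cos(ti) + n_metal * cmath.cos(tt))
    rp = (n_metal * math.cos(ti) - n_inc * cmath.cos(tt)) / \
         (n_metal * math.cos(ti) + n_inc * cmath.cos(tt))
    Rs, Rp = abs(rs) ** 2, abs(rp) ** 2
    diattenuation = (Rs - Rp) / (Rs + Rp)
    retardance = math.degrees(cmath.phase(rs) - cmath.phase(rp))
    return diattenuation, retardance

# Bare aluminum near 630 nm (n ~ 1.4 + 7.6j, a textbook value) folded
# at 45 degrees, as in an articulated telescope feed.
print(mirror_polarization(1.4 + 7.6j, 45.0))
```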
iPat: intelligent prediction and association tool for genomic research.
Chen, Chunpeng James; Zhang, Zhiwu
2018-06-01
The ultimate goal of genomic research is to effectively predict phenotypes from genotypes so that medical management can improve human health and molecular breeding can increase agricultural production. Genomic prediction or selection (GS) plays a complementary role to genome-wide association studies (GWAS), which is the primary method to identify genes underlying phenotypes. Unfortunately, most computing tools cannot perform data analyses for both GWAS and GS. Furthermore, the majority of these tools are executed through a command-line interface (CLI), which requires programming skills. Non-programmers struggle to use them efficiently because of the steep learning curves and zero tolerance for data formats and mistakes when inputting keywords and parameters. To address these problems, this study developed a software package, named the Intelligent Prediction and Association Tool (iPat), with a user-friendly graphical user interface. With iPat, GWAS or GS can be performed using a pointing device to simply drag and/or click on graphical elements to specify input data files, choose input parameters and select analytical models. Models available to users include those implemented in third party CLI packages such as GAPIT, PLINK, FarmCPU, BLINK, rrBLUP and BGLR. Users can choose any data format and conduct analyses with any of these packages. File conversions are automatically conducted for specified input data and selected packages. A GWAS-assisted genomic prediction method was implemented to perform genomic prediction using any GWAS method such as FarmCPU. iPat was written in Java for adaptation to multiple operating systems including Windows, Mac and Linux. The iPat executable file, user manual, tutorials and example datasets are freely available at http://zzlab.net/iPat. zhiwu.zhang@wsu.edu.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Etingov, Pavel; Makarov, Yuri; Subbarao, Kris
RUT software is designed for use by Balancing Authorities to predict and display additional requirements caused by the variability and uncertainty in load and generation. The prediction is made for the next operating hours as well as for the next day. The tool predicts possible deficiencies in generation capability and ramping capability. A deficiency of balancing resources can pose serious risks to power system stability and also impact real-time market energy prices. The tool dynamically and adaptively correlates changing system conditions with the additional balancing needs triggered by the interplay between forecasted and actual load and output of variable resources. The assessment is performed using a specially developed probabilistic algorithm incorporating multiple sources of uncertainty, including wind, solar and load forecast errors. The tool evaluates required generation for a worst-case scenario, with a user-specified confidence level.
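The probabilistic assessment described above can be sketched as a small Monte Carlo calculation: sample the forecast errors, form the net imbalance, and read off the capacity needed at the user-specified confidence level. The independent zero-mean Gaussian error distributions and the megawatt figures below are assumptions for illustration; the actual tool derives its uncertainty distributions empirically.

```python
import numpy as np

rng = np.random.default_rng(0)

def balancing_requirement(load_sigma, wind_sigma, solar_sigma,
                          confidence=0.95, n=100_000):
    """Up/down balancing capacity (MW) covering combined forecast errors."""
    err = (rng.normal(0, load_sigma, n)       # load forecast error
           - rng.normal(0, wind_sigma, n)     # wind forecast error
           - rng.normal(0, solar_sigma, n))   # solar forecast error
    lo, hi = np.quantile(err, [(1 - confidence) / 2, (1 + confidence) / 2])
    return lo, hi  # down- and up-regulation requirement at this confidence

print(balancing_requirement(150.0, 120.0, 60.0, confidence=0.95))
```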
Development and evaluation of a predictive algorithm for telerobotic task complexity
NASA Technical Reports Server (NTRS)
Gernhardt, M. L.; Hunter, R. C.; Hedgecock, J. C.; Stephenson, A. G.
1993-01-01
There is a wide range of complexity in the various telerobotic servicing tasks performed in subsea, space, and hazardous material handling environments. Experience with telerobotic servicing has evolved into a knowledge base used to design tasks to be 'telerobot friendly.' This knowledge base generally resides in a small group of people. Written documentation and requirements are limited in conveying this knowledge base to serviceable equipment designers and are subject to misinterpretation. A mathematical model of task complexity based on measurable task parameters and telerobot performance characteristics would be a valuable tool to designers and operational planners. Oceaneering Space Systems and TRW have performed an independent research and development project to develop such a tool for telerobotic orbital replacement unit (ORU) exchange. This algorithm was developed to predict an ORU exchange degree of difficulty rating (based on the Cooper-Harper rating used to assess piloted operations). It is based on measurable parameters of the ORU, attachment receptacle and quantifiable telerobotic performance characteristics (e.g., link length, joint ranges, positional accuracy, tool lengths, number of cameras, and locations). The resulting algorithm can be used to predict task complexity as the ORU parameters, receptacle parameters, and telerobotic characteristics are varied.
Hybrid ABC Optimized MARS-Based Modeling of the Milling Tool Wear from Milling Run Experimental Data
García Nieto, Paulino José; García-Gonzalo, Esperanza; Ordóñez Galán, Celestino; Bernardo Sánchez, Antonio
2016-01-01
Milling cutters are important cutting tools used in milling machines to perform milling operations, and they are prone to wear and subsequent failure. In this paper, a practical new hybrid model is proposed to predict milling tool wear in a regular cut, as well as in entry and exit cuts. The model combines the artificial bee colony (ABC) optimization technique with the multivariate adaptive regression splines (MARS) technique. The optimization mechanism tunes the parameter settings of the MARS training procedure, which significantly influence regression accuracy. An ABC–MARS-based model was successfully used here to predict the milling tool flank wear (output variable) as a function of the following input variables: the time duration of the experiment, depth of cut, feed, type of material, etc. Regression with optimal hyperparameters was performed and a coefficient of determination of 0.94 was obtained. The ABC–MARS-based model's goodness of fit to experimental data confirmed its good performance. The model also allowed us to ascertain the most influential parameters on milling tool flank wear, with a view to proposing improvements to the milling machine. Finally, the conclusions of this study are presented. PMID:28787882
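A minimal sketch of the hybrid scheme follows: a simplified artificial bee colony loop searches the MARS hyperparameters (maximum interaction degree and pruning penalty) by cross-validated R². It merges the employed and onlooker phases and omits fitness-proportional selection, and it assumes the sklearn-contrib py-earth package supplies the MARS implementation; the paper's own parameterization and data are not reproduced.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from pyearth import Earth  # assumes the py-earth package is installed

rng = np.random.default_rng(1)

def fitness(p, X, y):
    """Mean 5-fold R^2 of a MARS model with candidate hyperparameters p."""
    model = Earth(max_degree=int(round(p[0])), penalty=float(p[1]))
    return cross_val_score(model, X, y, cv=5, scoring="r2").mean()

def abc_search(X, y, bounds=((1, 3), (0.5, 6.0)),
               n_sources=8, iters=20, limit=5):
    lo, hi = np.array(bounds, float).T
    src = rng.uniform(lo, hi, (n_sources, len(lo)))        # food sources
    fit = np.array([fitness(p, X, y) for p in src])
    stale = np.zeros(n_sources, int)
    for _ in range(iters):
        for i in range(n_sources):                         # employed/onlooker move
            j = rng.integers(n_sources)
            cand = np.clip(src[i] + rng.uniform(-1, 1, len(lo)) * (src[i] - src[j]),
                           lo, hi)
            f = fitness(cand, X, y)
            if f > fit[i]:
                src[i], fit[i], stale[i] = cand, f, 0
            else:
                stale[i] += 1
        for i in np.where(stale > limit)[0]:               # scout: abandon stale source
            src[i] = rng.uniform(lo, hi)
            fit[i] = fitness(src[i], X, y)
            stale[i] = 0
    best = int(fit.argmax())
    return src[best], fit[best]
```

Called as `abc_search(X_wear, y_flank_wear)` on a feature matrix of cutting conditions, the loop returns the best (max_degree, penalty) pair and its cross-validated R², to be compared against the 0.94 determination coefficient reported above.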
Model Performance Evaluation and Scenario Analysis (MPESA)
Model Performance Evaluation and Scenario Analysis (MPESA) assesses how well models predict time series data. The tool was developed for use with the Hydrological Simulation Program-Fortran (HSPF) and the Stormwater Management Model (SWMM).
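The entry does not list MPESA's specific statistics, but time series evaluation in the HSPF/SWMM context conventionally rests on goodness-of-fit measures such as the Nash-Sutcliffe efficiency (NSE) and root mean square error; a minimal sketch of both follows, with made-up flow values.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect; below 0 is worse than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

observed  = [2.1, 3.4, 5.0, 4.2, 2.8]   # e.g. daily streamflow, m^3/s
simulated = [2.4, 3.1, 4.6, 4.5, 2.5]   # HSPF or SWMM output
print(nse(observed, simulated), rmse(observed, simulated))
```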
Wen, Ping-Ping; Shi, Shao-Ping; Xu, Hao-Dong; Wang, Li-Na; Qiu, Jian-Ding
2016-10-15
As one of the most important reversible types of post-translational modification, protein methylation catalyzed by methyltransferases carries many pivotal biological functions and participates in many essential biological processes. Identification of methylation sites is a prerequisite for decoding methylation regulatory networks in living cells and understanding their physiological roles. Experimental methods are labor-intensive and time-consuming. In silico approaches offer a cost-effective, high-throughput means of predicting potential methylation sites, but previous predictors provide only a mixed (species-general) model and their prediction performance is not fully satisfactory. Recently, with the increasing availability of quantitative methylation datasets in diverse species (especially in eukaryotes), there is a growing need for species-specific predictors. Here, we designed a tool named PSSMe, based on an information gain (IG) feature optimization method, for species-specific methylation site prediction. The IG method was adopted to analyze the importance and contribution of each feature and to select the most valuable feature dimensions for reconstituting a new, ordered feature vector, which was then used to build the final prediction model. Our method improves prediction accuracy by about 15% compared with single features. Furthermore, our species-specific models significantly improve predictive performance compared with other general methylation prediction tools. Hence, our prediction results serve as a useful resource for elucidating the mechanism of arginine or lysine methylation and facilitate hypothesis-driven experimental design and validation. The online service is implemented in C# and freely available at http://bioinfo.ncu.edu.cn/PSSMe.aspx. Contact: jdqiu@ncu.edu.cn. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
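The information gain step can be sketched with scikit-learn, where mutual information serves as the information gain criterion for ranking features before the SVM is trained on the reduced vector. The arrays below are random placeholders standing in for encoded sequence windows, not the PSSMe training data.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((200, 400))        # encoded residues around candidate sites (toy)
y = rng.integers(0, 2, 200)       # 1 = methylated, 0 = not (toy labels)

# Rank features by mutual information (information gain), keep the top k,
# and train an RBF-kernel SVM on the reduced feature vector.
model = make_pipeline(
    SelectKBest(mutual_info_classif, k=100),
    StandardScaler(),
    SVC(kernel="rbf"),
)
print(cross_val_score(model, X, y, cv=5, scoring="accuracy").mean())
```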
Analysis Tools for CFD Multigrid Solvers
NASA Technical Reports Server (NTRS)
Mineck, Raymond E.; Thomas, James L.; Diskin, Boris
2004-01-01
Analysis tools are needed to guide the development and evaluate the performance of multigrid solvers for the fluid flow equations. Classical analysis tools, such as local mode analysis, often fail to accurately predict performance. Two-grid analysis tools, herein referred to as Idealized Coarse Grid and Idealized Relaxation iterations, have been developed and evaluated within a pilot multigrid solver. These new tools are applicable to general systems of equations and/or discretizations and point to problem areas within an existing multigrid solver. Idealized Relaxation and Idealized Coarse Grid are applied in developing textbook-efficient multigrid solvers for incompressible stagnation flow problems.
Greased Lightning (GL-10) Performance Flight Research: Flight Data Report
NASA Technical Reports Server (NTRS)
McSwain, Robert G.; Glaab, Louis J.; Theodore, Colin R.; Rhew, Ray D. (Editor); North, David D. (Editor)
2017-01-01
Modern aircraft design methods have produced acceptable designs for large conventional aircraft performance. With revolutionary electric propulsion technologies fueled by the growth of the small UAS (Unmanned Aerial Systems) industry, these same prediction models are being applied to new, smaller, experimental design concepts requiring a VTOL (Vertical Take Off and Landing) capability for ODM (On Demand Mobility). A 50% sub-scale GL-10 flight model was built and tested to demonstrate the transition from hover to forward flight utilizing DEP (Distributed Electric Propulsion)[1][2]. In 2016 plans were put in place to conduct performance flight testing on the 50% sub-scale GL-10 flight model to support a NASA project called DELIVER (Design Environment for Novel Vertical Lift Vehicles). DELIVER was investigating the feasibility of including smaller and more experimental aircraft configurations in a NASA design tool called NDARC (NASA Design and Analysis of Rotorcraft)[3]. This report covers the performance flight data collected during flight testing of the GL-10 50% sub-scale flight model conducted at Beaver Dam Airpark, VA. Overall, the flight test data provide great insight into how well existing conceptual design tools predict the performance of small-scale experimental DEP concepts. Low-fidelity conceptual design tools estimated the (L/D)max of the GL-10 50% sub-scale flight model to be 16; the experimentally measured (L/D)max was 7.2. The gap between predicted and measured aerodynamic performance highlights the complexity of wing and nacelle interactions, which is not currently accounted for in existing low-fidelity tools.
Force Modelling in Orthogonal Cutting Considering Flank Wear Effect
NASA Astrophysics Data System (ADS)
Rathod, Kanti Bhikhubhai; Lalwani, Devdas I.
2017-05-01
In the present work, an attempt has been made to provide a predictive cutting force model for orthogonal cutting by combining two different force models: a force model for a perfectly sharp tool extended to account for the effect of edge radius, and a force model for a worn tool. The first model is based on Oxley's predictive machining theory for orthogonal cutting; since Oxley's model assumes a perfectly sharp tool, the effect of the cutting edge (hone) radius is added and an improved model is presented. The second model, proposed by Waldorf, accounts for a worn tool (flank wear). Further, the developed combined force model is also used to predict flank wear width using an inverse approach. The performance of the developed combined total force model is compared with previously published results for AISI 1045 and AISI 4142 materials, and reasonably good agreement is found.
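The combined model's structure can be sketched as an additive decomposition: shear-zone and edge-radius components from the sharp-tool side, plus wear-land normal and friction forces on the worn-tool side. The contact stress and friction coefficient below are illustrative placeholders, not Waldorf's fitted values, and the upstream Oxley-type components are taken as given.

```python
def total_cutting_force(F_shear, F_edge, vb_mm, width_mm,
                        sigma_w=1200.0, mu_w=0.6):
    """Schematic combined force model: sharp-tool + edge + flank-wear terms.

    F_shear, F_edge : shear-zone and edge-radius components (N), e.g. from an
                      Oxley-type model computed upstream (not reproduced here).
    vb_mm, width_mm : flank wear land VB and width of cut (mm).
    sigma_w, mu_w   : assumed wear-land normal stress (MPa) and friction
                      coefficient; illustrative values only.
    """
    area = vb_mm * width_mm             # wear-land contact area, mm^2
    F_wear_n = sigma_w * area           # normal wear-land force (MPa * mm^2 = N)
    F_wear_f = mu_w * F_wear_n          # frictional wear-land force
    cutting = F_shear + F_wear_f        # cutting-direction total
    thrust = F_edge + F_wear_n          # thrust-direction total
    return cutting, thrust

print(total_cutting_force(F_shear=850.0, F_edge=120.0, vb_mm=0.3, width_mm=2.0))
```

Inverting the same relation, a measured force increase over the sharp-tool prediction yields an estimate of the wear land VB, which is the inverse approach mentioned above.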
Admissions Roulette: Predictive Factors for Success in Practice
ERIC Educational Resources Information Center
Pfouts, Jane H.; Henley, H. Carl, Jr.
1977-01-01
A multivariate predictive index of student field performance to be used as an admissions tool in graduate schools of social work is described. It measures the effect on field performance of (1) a measure of the student's intellectual ability, (2) undergraduate school quality, (3) prior work experience, and (4) student sex. (Author/LBH)
A Decision Support Prototype Tool for Predicting Student Performance in an ODL Environment
ERIC Educational Resources Information Center
Kotsiantis, S. B.; Pintelas, P. E.
2004-01-01
Machine Learning algorithms fed with data sets which include information such as attendance data, test scores and other student information can provide tutors with powerful tools for decision-making. Until now, much of the research has been limited to the relation between single variables and student performance. Combining multiple variables as…
A Business Analytics Software Tool for Monitoring and Predicting Radiology Throughput Performance.
Jones, Stephen; Cournane, Seán; Sheehy, Niall; Hederman, Lucy
2016-12-01
Business analytics (BA) is increasingly being utilised by radiology departments to analyse and present data. It encompasses statistical analysis, forecasting and predictive modelling and is used as an umbrella term for decision support and business intelligence systems. The primary aim of this study was to determine whether utilising BA technologies could contribute towards improved decision support and resource management within radiology departments. A set of information technology requirements was identified with key stakeholders, and a prototype BA software tool was designed, developed and implemented. A qualitative evaluation of the tool was carried out through a series of semi-structured interviews with key stakeholders. Feedback was collated, and emergent themes were identified. The results indicated that BA software applications can provide visibility of radiology performance data across all time horizons. The study demonstrated that the tool could potentially assist with improving operational efficiencies and management of radiology resources.
Girardat-Rotar, Laura; Braun, Julia; Puhan, Milo A; Abraham, Alison G; Serra, Andreas L
2017-07-17
Prediction models in autosomal dominant polycystic kidney disease (ADPKD) are useful in clinical settings to identify patients at greater risk of rapid disease progression, in whom a treatment may have more benefits than harms. Mayo Clinic investigators developed a risk prediction tool for ADPKD patients using a single kidney volume measurement. Our aim was to perform an independent geographical and temporal external validation, as well as to evaluate the potential for improving predictive performance by including additional information on total kidney volume. We used data from the ongoing Swiss ADPKD study from 2006 to 2016. The main analysis included a sample of 214 patients with typical ADPKD (Class 1). We evaluated the calibration and discrimination of the Mayo Clinic model in our external sample and assessed whether predictive performance could be improved through the addition of subsequent kidney volume measurements beyond the baseline assessment. The calibration of both versions of the Mayo Clinic prediction model, using continuous height-adjusted total kidney volume (HtTKV) and using risk subclasses, was good, with R² values of 78% and 70%, respectively. Accuracy was also good, with 91.5% and 88.7% of predicted values, respectively, falling within 30% of observed values. Additional information on kidney volume did not substantially improve model performance. The Mayo Clinic prediction models are generalizable to other clinical settings and provide an accurate tool, based on available predictors, to identify patients at high risk of rapid disease progression.
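The calibration statistics quoted above can be reproduced, in spirit, with a few lines: score predicted against observed values and count the share of predictions within plus or minus 30% of the observed value. The numbers below are placeholders, not study data.

```python
import numpy as np

def calibration_summary(predicted, observed, tol=0.30):
    """R^2 of predicted vs observed, and fraction of predictions within tol."""
    p, o = np.asarray(predicted, float), np.asarray(observed, float)
    r2 = 1.0 - np.sum((o - p) ** 2) / np.sum((o - o.mean()) ** 2)
    within = float(np.mean(np.abs(p - o) <= tol * o))
    return r2, within

# e.g. predicted vs observed disease-progression measure for a validation cohort
print(calibration_summary([4.1, 2.0, 6.3, 3.2], [3.8, 2.4, 5.9, 3.5]))
```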
Lee, Ciaran M; Davis, Timothy H; Bao, Gang
2018-04-01
What is the topic of this review? In this review, we analyse the performance of recently described tools for CRISPR/Cas9 guide RNA design, in particular, design tools that predict CRISPR/Cas9 activity. What advances does it highlight? Recently, many tools designed to predict CRISPR/Cas9 activity have been reported. However, the majority of these tools lack experimental validation. Our analyses indicate that these tools have poor predictive power. Our preliminary results suggest that target site accessibility should be considered in order to develop better guide RNA design tools with improved predictive power. The recent adaptation of the clustered regulatory interspaced short palindromic repeats (CRISPR)/CRISPR-associated protein 9 (Cas9) system for targeted genome engineering has led to its widespread application in many fields worldwide. In order to gain a better understanding of the design rules of CRISPR/Cas9 systems, several groups have carried out large library-based screens leading to some insight into sequence preferences among highly active target sites. To facilitate CRISPR/Cas9 design, these studies have spawned a plethora of guide RNA (gRNA) design tools with algorithms based solely on direct or indirect sequence features. Here, we demonstrate that the predictive power of these tools is poor, suggesting that sequence features alone cannot accurately inform the cutting efficiency of a particular CRISPR/Cas9 gRNA design. Furthermore, we demonstrate that DNA target site accessibility influences the activity of CRISPR/Cas9. With further optimization, we hypothesize that it will be possible to increase the predictive power of gRNA design tools by including both sequence and target site accessibility metrics. © 2017 The Authors. Experimental Physiology © 2017 The Physiological Society.
Gupta, Punkaj; Rettiganti, Mallikarjuna; Gossett, Jeffrey M; Daufeldt, Jennifer; Rice, Tom B; Wetzel, Randall C
2018-01-01
To create a novel tool to predict favorable neurologic outcomes during ICU stay among children with critical illness. Logistic regression models using adaptive lasso methodology were used to identify independent factors associated with favorable neurologic outcomes. A mixed effects logistic regression model was used to create the final prediction model including all predictors selected from the lasso model. Model validation was performed using a 10-fold internal cross-validation approach. Virtual Pediatric Systems (VPS, LLC, Los Angeles, CA) database. Patients less than 18 years old admitted to one of the participating ICUs in the Virtual Pediatric Systems database were included (2009-2015). None. A total of 160,570 patients from 90 hospitals qualified for inclusion. Of these, 1,675 patients (1.04%) experienced a decline in Pediatric Cerebral Performance Category scale of at least 2 between ICU admission and ICU discharge (unfavorable neurologic outcome). The independent factors associated with unfavorable neurologic outcome included higher weight at ICU admission, higher Pediatric Index of Mortality-2 score at ICU admission, cardiac arrest, stroke, seizures, head/nonhead trauma, use of conventional mechanical ventilation and high-frequency oscillatory ventilation, prolonged ICU length of stay, and prolonged use of mechanical ventilation. The presence of chromosomal anomaly, cardiac surgery, and utilization of nitric oxide were associated with favorable neurologic outcome. The final online prediction tool can be accessed at https://soipredictiontool.shinyapps.io/GNOScore/. In the internal validation sample, our model predicted favorable neurologic outcomes in 139,688 patients, compared with 139,591 observed. The area under the receiver operating characteristic curve for the validation model was 0.90. This proposed prediction tool encompasses 20 risk factors in one probability to predict favorable neurologic outcome during ICU stay among children with critical illness. Future studies should seek external validation and improved discrimination of this prediction tool.
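The variable selection step can be approximated with an L1-penalized (lasso) logistic regression, as in the sketch below; note this is plain lasso on toy data, not the adaptive lasso plus mixed effects model the study describes, and the covariates are random placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.random((1000, 20))     # weight, PIM-2 score, ventilation flags, ... (toy)
y = rng.integers(0, 2, 1000)   # 1 = favorable neurologic outcome (toy labels)

# The L1 penalty shrinks uninformative coefficients exactly to zero, doing
# variable selection inside the logistic model itself.
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
)
print(cross_val_score(model, X, y, cv=10, scoring="roc_auc").mean())  # cf. AUC 0.90
```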
Discovering the Motivations of Students When Using an Online Learning Tool
ERIC Educational Resources Information Center
Saadé, Raafat George; Al Sharhan, Jamal
2015-01-01
In an educational setting, the use of online learning tools impacts student performance. Motivation and beliefs play an important role in predicting student decisions to use these learning tools. However, IT-personality entailing playfulness on the web, perceived personal innovativeness, and enjoyment may have an impact on motivations. In this…
A review of statistical updating methods for clinical prediction models.
Su, Ting-Li; Jaki, Thomas; Hickey, Graeme L; Buchan, Iain; Sperrin, Matthew
2018-01-01
A clinical prediction model is a tool for predicting healthcare outcomes, usually within a specific population and context. A common approach is to develop a new clinical prediction model for each population and context; however, this wastes potentially useful historical information. A better approach is to update or incorporate existing clinical prediction models already developed for use in similar contexts or populations. In addition, clinical prediction models commonly become miscalibrated over time and need replacing or updating. In this article, we review a range of approaches for re-using and updating clinical prediction models; these fall into three main categories: simple coefficient updating, combining multiple previous clinical prediction models in a meta-model, and dynamic updating of models. We evaluated the performance (discrimination and calibration) of the different strategies using data on mortality following cardiac surgery in the United Kingdom. We found that no single strategy performed sufficiently well to be used to the exclusion of the others. In conclusion, useful tools exist for updating existing clinical prediction models for a new population or context, and these should be implemented, rather than developing a new clinical prediction model from scratch, using a breadth of complementary statistical methods.
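The simplest of the coefficient-updating strategies, logistic recalibration, refits only an intercept and a calibration slope on the new population while leaving the original predictor weights untouched. A minimal sketch under that reading follows; it is generic scikit-learn code, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def recalibrate(lp_old, y_new):
    """Refit intercept a and slope b of an existing clinical prediction model.

    lp_old : linear predictor of the existing model evaluated on new patients.
    y_new  : their observed binary outcomes.
    Fitting y ~ a + b * lp_old corrects calibration-in-the-large (a) and
    uniform over- or under-fitting (b) without changing relative weights.
    """
    lr = LogisticRegression(C=1e6, solver="lbfgs")   # effectively unpenalized
    lr.fit(np.asarray(lp_old, float).reshape(-1, 1), np.asarray(y_new))
    return lr.intercept_[0], lr.coef_[0, 0]

# updated risk = 1 / (1 + exp(-(a + b * lp_old)))
```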
Park, Seong Ho; Han, Kyunghwa
2018-03-01
The use of artificial intelligence in medicine is currently an issue of great interest, especially with regard to the diagnostic or predictive analysis of medical images. Adoption of an artificial intelligence tool in clinical practice requires careful confirmation of its clinical utility. Herein, the authors explain key methodology points involved in a clinical evaluation of artificial intelligence technology for use in medicine, especially high-dimensional or overparameterized diagnostic or predictive models based on artificial deep neural networks, mainly from the standpoints of clinical epidemiology and biostatistics. First, statistical methods for assessing the discrimination and calibration performances of a diagnostic or predictive model are summarized. Next, the effects of disease manifestation spectrum and disease prevalence on the performance results are explained, followed by a discussion of the difference between evaluating performance on internal versus external datasets, the importance of using an adequate external dataset obtained from a well-defined clinical cohort to avoid overestimating clinical performance as a result of spectrum bias and of overfitting in high-dimensional or overparameterized classification models, and the essentials for achieving a more robust clinical evaluation. Finally, the authors review the role of clinical trials and observational outcome studies in the ultimate clinical verification of diagnostic or predictive artificial intelligence tools through patient outcomes, beyond performance metrics, and how to design such studies. © RSNA, 2018.
A simple prediction tool for inhaled corticosteroid response in asthmatic children.
Wu, Yi-Fan; Su, Ming-Wei; Chiang, Bor-Luen; Yang, Yao-Hsu; Tsai, Ching-Hui; Lee, Yungling L
2017-12-07
Inhaled corticosteroids are recommended as the first-line controller medication for childhood asthma owing to their multiple clinical benefits. However, heterogeneity in the response to these drugs remains a significant clinical problem. Children aged 5 to 18 years with mild to moderate persistent asthma were recruited into the Taiwanese Consortium of Childhood Asthma Study. Their responses to inhaled corticosteroids were assessed based on their improvements in the asthma control test and peak expiratory flow. The candidate predictors of responsiveness were demographic and clinical features available in primary care settings. We developed a prediction model using logistic regression and simplified it into a practical tool. We assessed its predictive performance using the area under the receiver operating characteristic curve. Of the 73 asthmatic children with baseline and follow-up outcome measurements for inhaled corticosteroid treatment, 24 (33%) were defined as non-responders. The tool we developed consists of three predictors, yielding a total score between 0 and 5: the age at physician-diagnosed asthma, sex, and exhaled nitric oxide. At a cut-off score of 3, the sensitivity and specificity of the tool for predicting inhaled corticosteroid non-responsiveness were 0.75 and 0.69, respectively. The area under the receiver operating characteristic curve for the prediction tool was 0.763. Our prediction tool represents a simple and low-cost method for predicting the response to inhaled corticosteroid treatment in asthmatic children.
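A tool of the kind described, three predictors summing to a 0-5 score with a cut-off at 3, can be sketched as below. The abstract names the predictors but not the published point weights or thresholds, so the values here are hypothetical placeholders for illustration only.

```python
def ics_nonresponse_score(age_at_diagnosis, male, feno_ppb,
                          age_cut=5, feno_cut=25):
    """Hypothetical 0-5 score; weights and cut-offs are NOT the published ones."""
    score = 0
    score += 2 if age_at_diagnosis >= age_cut else 0   # age at asthma diagnosis
    score += 1 if male else 0                          # sex
    score += 2 if feno_ppb >= feno_cut else 0          # exhaled nitric oxide
    return score, score >= 3   # >= 3 flags likely non-response (Se 0.75, Sp 0.69)

print(ics_nonresponse_score(age_at_diagnosis=7, male=True, feno_ppb=30))
```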
Griffiths, Alex; Beaussier, Anne-Laure; Demeritt, David; Rothstein, Henry
2017-02-01
The Care Quality Commission (CQC) is responsible for ensuring the quality of the health and social care delivered by more than 30 000 registered providers in England. With only limited resources for conducting on-site inspections, the CQC has used statistical surveillance tools to help it identify which providers it should prioritise for inspection. In the face of planned funding cuts, the CQC plans to put more reliance on statistical surveillance tools to assess risks to quality and prioritise inspections accordingly. To evaluate the ability of the CQC's latest surveillance tool, Intelligent Monitoring (IM), to predict the quality of care provided by National Health Service (NHS) hospital trusts so that those at greatest risk of providing poor-quality care can be identified and targeted for inspection. The predictive ability of the IM tool is evaluated through regression analyses and χ² testing of the relationship between the quantitative risk score generated by the IM tool and the subsequent quality rating awarded following detailed on-site inspection by large expert teams of inspectors. First, the continuous risk scores generated by the CQC's IM statistical surveillance tool cannot predict inspection-based quality ratings of NHS hospital trusts (OR 0.38 (0.14 to 1.05) for Outstanding/Good, OR 0.94 (0.80 to 1.10) for Good/Requires improvement, and OR 0.90 (0.76 to 1.07) for Requires improvement/Inadequate). Second, the risk scores cannot be used more simply to distinguish the trusts performing poorly (those subsequently rated either 'Requires improvement' or 'Inadequate') from the trusts performing well (those subsequently rated either 'Good' or 'Outstanding') (OR 1.07 (0.91 to 1.26)). Classifying CQC's risk bandings 1-3 as high risk and 4-6 as low risk, 11 of the high-risk trusts were performing well and 43 of the low-risk trusts were performing poorly, resulting in an overall accuracy rate of 47.6%. Third, the risk scores cannot be used even more simply to distinguish the worst-performing trusts (those subsequently rated 'Inadequate') from the remaining, better-performing trusts (OR 1.11 (0.94 to 1.32)). Classifying CQC's risk banding 1 as high risk and 2-6 as low risk, the highest overall accuracy rate of 72.8% was achieved, but still only 6 of the 13 Inadequate trusts were correctly classified as high risk. Since the IM statistical surveillance tool cannot predict the outcome of NHS hospital trust inspections, it cannot be used for prioritisation. A new approach to inspection planning is therefore required. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Hinge Moment Coefficient Prediction Tool and Control Force Analysis of Extra-300 Aerobatic Aircraft
NASA Astrophysics Data System (ADS)
Nurohman, Chandra; Arifianto, Ony; Barecasco, Agra
2018-04-01
This paper presents the development of a tool for predicting the hinge moment coefficients of subsonic aircraft based on Roskam’s method, including its validation and its application to predict the hinge moment coefficients of an Extra-300. The hinge moment coefficients are used to predict the stick forces of the aircraft during several aerobatic maneuvers, i.e. inside loop, half cuban 8, split-s, and aileron roll. The maximum longitudinal stick force is 566.97 N, occurring in the inside loop, while the maximum lateral stick force is 340.82 N, occurring in the aileron roll. Furthermore, validation of the hinge moment prediction method is performed using Cessna 172 data.
Singh, Jay P; Doll, Helen; Grann, Martin
2012-01-01
Objective: To investigate the predictive validity of tools commonly used to assess the risk of violence, sexual, and criminal behaviour. Design: Systematic review and tabular meta-analysis of replication studies following PRISMA guidelines. Data sources: PsycINFO, Embase, Medline, and United States Criminal Justice Reference Service Abstracts. Review methods: We included replication studies from 1 January 1995 to 1 January 2011 if they provided contingency data for the offending outcome that the tools were designed to predict. We calculated the diagnostic odds ratio, sensitivity, specificity, area under the curve, positive predictive value, negative predictive value, the number needed to detain to prevent one offence, as well as a novel performance indicator, the number safely discharged. We investigated potential sources of heterogeneity using metaregression and subgroup analyses. Results: Risk assessments were conducted on 73 samples comprising 24 847 participants from 13 countries, of whom 5879 (23.7%) offended over an average of 49.6 months. When used to predict violent offending, risk assessment tools produced low to moderate positive predictive values (median 41%, interquartile range 27-60%) and higher negative predictive values (91%, 81-95%), with a corresponding median number needed to detain of 2 (2-4) and number safely discharged of 10 (4-18). Instruments designed to predict violent offending performed better than those aimed at predicting sexual or general crime. Conclusions: Although risk assessment tools are widely used in clinical and criminal justice settings, their predictive accuracy varies depending on how they are used. They seem to identify low-risk individuals with high levels of accuracy, but their use as sole determinants of detention, sentencing, and release is not supported by the current evidence. Further research is needed to examine their contribution to treatment and management. PMID:22833604
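The performance indicators reported above all derive from a 2x2 contingency table; the sketch below computes them for one illustrative table (the review pools 73 samples, so these counts are placeholders, and "number safely discharged" is computed under one plausible reading, discharged low-risk individuals per missed offender).

```python
def two_by_two_summary(tp, fp, fn, tn):
    """Predictive-validity indicators from one 2x2 contingency table."""
    return dict(
        sensitivity=tp / (tp + fn),
        specificity=tn / (tn + fp),
        ppv=tp / (tp + fp),
        npv=tn / (tn + fn),
        diagnostic_odds_ratio=(tp * tn) / (fp * fn),
        number_needed_to_detain=(tp + fp) / tp,   # detained per true positive
        number_safely_discharged=tn / fn,         # one plausible definition
    )

print(two_by_two_summary(tp=40, fp=60, fn=10, tn=90))
```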
MiRduplexSVM: A High-Performing MiRNA-Duplex Prediction and Evaluation Methodology
Karathanasis, Nestoras; Tsamardinos, Ioannis; Poirazi, Panayiota
2015-01-01
We address the problem of predicting the position of a miRNA duplex on a microRNA hairpin via the development and application of a novel SVM-based methodology. Our method combines a unique problem representation and an unbiased optimization protocol to learn from mirBase19.0 an accurate predictive model, termed MiRduplexSVM. This is the first model that provides precise information about all four ends of the miRNA duplex. We show that (a) our method outperforms four state-of-the-art tools, namely MaturePred, MiRPara, MatureBayes and MiRdup, as well as a Simple Geometric Locator, when applied on the same training datasets employed for each tool and evaluated on a common blind test set; (b) in all comparisons, MiRduplexSVM shows superior performance, achieving up to a 60% increase in prediction accuracy for mammalian hairpins, and can generalize very well on plant hairpins without any special optimization; (c) the tool has a number of important applications, such as the ability to accurately predict the miRNA or the miRNA*, given the opposite strand of a duplex. Its performance on this task is superior to the 2-nt overhang rule commonly used in computational studies and similar to that of a comparative genomic approach, without the need for prior knowledge or the complexity of performing multiple alignments. Finally, it is able to evaluate novel, potential miRNAs found either computationally or experimentally. In relation to recent confidence evaluation methods used in miRBase, MiRduplexSVM was successful in identifying high confidence potential miRNAs. PMID:25961860
Mahmood, Khalid; Jung, Chol-Hee; Philip, Gayle; Georgeson, Peter; Chung, Jessica; Pope, Bernard J; Park, Daniel J
2017-05-16
Genetic variant effect prediction algorithms are used extensively in clinical genomics and research to determine the likely consequences of amino acid substitutions on protein function. It is vital that we better understand their accuracies and limitations, because published performance metrics are confounded by serious problems of circularity and error propagation. Here, we derive three independent, functionally determined human mutation datasets, UniFun, BRCA1-DMS and TP53-TA, and employ them, alongside previously described datasets, to assess the pre-eminent variant effect prediction tools. Apparent accuracies of variant effect prediction tools were influenced significantly by the benchmarking dataset. Benchmarking with the assay-determined datasets UniFun and BRCA1-DMS yielded areas under the receiver operating characteristic curves in the modest ranges of 0.52 to 0.63 and 0.54 to 0.75, respectively, considerably lower than observed for other, potentially more conflicted datasets. These results raise concerns about how such algorithms should be employed, particularly in a clinical setting. Contemporary variant effect prediction tools are unlikely to be as accurate at the general prediction of functional impacts on proteins as previously reported. Use of functional assay-based datasets that avoid prior dependencies promises to be valuable for the ongoing development and accurate benchmarking of such tools.
Predicting falls in older adults using the four square step test.
Cleary, Kimberly; Skornyakov, Elena
2017-10-01
The Four Square Step Test (FSST) is a performance-based balance tool involving stepping over four single-point canes placed on the floor in a cross configuration. The purpose of this study was to evaluate properties of the FSST in older adults who lived independently. Forty-five community dwelling older adults provided fall history and completed the FSST, Berg Balance Scale (BBS), Timed Up and Go (TUG), and Tinetti in random order. Future falls were recorded for 12 months following testing. The FSST accurately distinguished between non-fallers and multiple fallers, and the 15-second threshold score accurately distinguished multiple fallers from non-multiple fallers based on fall history. The FSST predicted future falls, and performance on the FSST was significantly correlated with performance on the BBS, TUG, and Tinetti. However, the test is not appropriate for older adults who use walkers. Overall, the FSST is a valid yet underutilized measure of balance performance and fall prediction tool that physical therapists should consider using in ambulatory community dwelling older adults.
NASA Astrophysics Data System (ADS)
Murrill, Steven R.; Jacobs, Eddie L.; Franck, Charmaine C.; Petkie, Douglas T.; De Lucia, Frank C.
2015-10-01
The U.S. Army Research Laboratory (ARL) has continued to develop and enhance a millimeter-wave (MMW) and submillimeter-wave (SMMW)/terahertz (THz)-band imaging system performance prediction and analysis tool for both the detection and identification of concealed weaponry and for pilotage obstacle avoidance. The details of the MATLAB-based model, which accounts for the effects of all critical sensor and display components, atmospheric attenuation, concealment material attenuation, and active illumination, were reported at the 2005 SPIE Europe Security and Defence Symposium (Brugge). An advanced version of the base model, which accounts both for the dramatic impact that target and background orientation can have on target observability, as related to specular and Lambertian reflections captured by an active-illumination-based imaging system, and for the impact of target and background thermal emission, was reported at the 2007 SPIE Defense and Security Symposium (Orlando). Further development of this tool, which added a MODTRAN-based atmospheric attenuation calculator and advanced system architecture configuration inputs allowing straightforward performance analysis of active or passive systems based on scanning (single- or line-array detector element(s)) or staring (focal-plane-array detector elements) imaging architectures, was reported at the 2011 SPIE Europe Security and Defence Symposium (Prague). This paper provides a comprehensive review of a newly enhanced MMW and SMMW/THz imaging system analysis and design tool that now includes an improved noise sub-model for more accurate and reliable performance predictions, the capability to account for post-capture image contrast enhancement, and the capability to account for concealment material backscatter with active-illumination-based systems. Present plans for additional expansion of the model's predictive capabilities are also outlined.
ERIC Educational Resources Information Center
Arikan, Serkan
2014-01-01
There are many studies that focus on factors affecting achievement. However, there is limited research that used student characteristics indices reported by the Programme for International Student Assessment (PISA). Therefore, this study investigated the predictive effects of student characteristics on mathematics performance of Turkish students.…
An ensemble model of QSAR tools for regulatory risk assessment.
Pradeep, Prachi; Povinelli, Richard J; White, Shannon; Merrill, Stephen J
2016-01-01
Quantitative structure activity relationships (QSARs) are theoretical models that relate a quantitative measure of chemical structure to a physical property or a biological effect. QSAR predictions can be used for chemical risk assessment for protection of human and environmental health, which makes them interesting to regulators, especially in the absence of experimental data. For compatibility with regulatory use, QSAR models should be transparent, reproducible and optimized to minimize the number of false negatives. In silico QSAR tools are gaining wide acceptance as a faster alternative to otherwise time-consuming clinical and animal testing methods. However, different QSAR tools often make conflicting predictions for a given chemical and may also vary in their predictive performance across different chemical datasets. In a regulatory context, conflicting predictions raise interpretation, validation and adequacy concerns. To address these concerns, ensemble learning techniques in the machine learning paradigm can be used to integrate predictions from multiple tools. By leveraging various underlying QSAR algorithms and training datasets, the resulting consensus prediction should yield better overall predictive ability. We present a novel ensemble QSAR model using Bayesian classification. The model includes a cut-off parameter that allows selection of the desired trade-off between model sensitivity and specificity. The predictive performance of the ensemble model is compared with four in silico tools (Toxtree, Lazar, OECD Toolbox, and Danish QSAR) in predicting carcinogenicity for a dataset of air toxins (332 chemicals) and a subset of the gold carcinogenic potency database (480 chemicals). Leave-one-out cross validation results show that the ensemble model achieves the best trade-off between sensitivity and specificity (accuracy: 83.8% and 80.4%, and balanced accuracy: 80.6% and 80.8%) and the highest inter-rater agreement [kappa (κ): 0.63 and 0.62] for both datasets. The ROC curves demonstrate the utility of the cut-off feature in the predictive ability of the ensemble model. This feature provides an additional control to regulators in grading a chemical based on the severity of the toxic endpoint under study.
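A naive Bayes consensus over several tools' binary calls, with an adjustable posterior cut-off, captures the flavor of the ensemble described above; the per-tool sensitivities and specificities below are assumed values, not those of Toxtree, Lazar, the OECD Toolbox or Danish QSAR.

```python
import numpy as np

def bayes_ensemble(tool_preds, sens, spec, prior=0.5, cutoff=0.5):
    """Consensus toxicity call from several QSAR tools via naive Bayes.

    tool_preds : 0/1 prediction from each tool for one chemical.
    sens, spec : per-tool sensitivity and specificity (from validation data).
    cutoff     : threshold on the posterior; lowering it trades specificity
                 for sensitivity, mirroring the model's cut-off parameter.
    """
    log_odds = np.log(prior / (1 - prior))
    for p, se, sp in zip(tool_preds, sens, spec):
        log_odds += np.log(se / (1 - sp)) if p == 1 else np.log((1 - se) / sp)
    posterior = 1.0 / (1.0 + np.exp(-log_odds))
    return posterior, posterior >= cutoff

# Four hypothetical tools voting toxic/non-toxic on one chemical
print(bayes_ensemble([1, 0, 1, 1], sens=[.80, .70, .75, .85],
                     spec=[.70, .80, .65, .75]))
```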
Technology Solutions Case Study: Predicting Envelope Leakage in Attached Dwellings
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2013-11-01
The most common method of measuring air leakage is to perform a single (or “solo”) blower door pressurization and/or depressurization test. In detached housing, the single blower door test measures leakage to the outside. In attached housing, however, this “solo” test method measures both air leakage to the outside and air leakage between adjacent units through common surfaces. In an attempt to create a simplified tool for predicting leakage to the outside, Building America team Consortium for Advanced Residential Buildings (CARB) performed a preliminary statistical analysis on blower door test results from 112 attached dwelling units in four apartment complexes. Although the subject data set is limited in size and variety, the preliminary analyses suggest significant predictors are present and support the development of a predictive model. Further data collection is underway to create a more robust prediction tool for use across different construction types, climate zones, and unit configurations.
Cao, Renzhi; Wang, Zheng; Wang, Yiheng; Cheng, Jianlin
2014-04-28
It is important to predict the quality of a protein structural model before its native structure is known. The method that can predict the absolute local quality of individual residues in a single protein model is rare, yet particularly needed for using, ranking and refining protein models. We developed a machine learning tool (SMOQ) that can predict the distance deviation of each residue in a single protein model. SMOQ uses support vector machines (SVM) with protein sequence and structural features (i.e. basic feature set), including amino acid sequence, secondary structures, solvent accessibilities, and residue-residue contacts to make predictions. We also trained a SVM model with two new additional features (profiles and SOV scores) on 20 CASP8 targets and found that including them can only improve the performance when real deviations between native and model are higher than 5Å. The SMOQ tool finally released uses the basic feature set trained on 85 CASP8 targets. Moreover, SMOQ implemented a way to convert predicted local quality scores into a global quality score. SMOQ was tested on the 84 CASP9 single-domain targets. The average difference between the residue-specific distance deviation predicted by our method and the actual distance deviation on the test data is 2.637Å. The global quality prediction accuracy of the tool is comparable to other good tools on the same benchmark. SMOQ is a useful tool for protein single model quality assessment. Its source code and executable are available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/.
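The core of such a per-residue predictor is support vector regression on residue-level feature vectors; a toy sketch follows (random stand-ins for SMOQ's sequence, secondary structure, accessibility and contact features, and an assumed S-score-like pooling with a 3.8 Å scale for the local-to-global conversion, which the abstract does not specify).

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X_train = rng.random((500, 60))       # one feature row per residue (toy)
y_train = rng.gamma(2.0, 1.5, 500)    # distance deviation to native, in Angstrom

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(X_train, y_train)
local = model.predict(X_train[:5])    # predicted per-residue deviations

# One way to pool local deviations into a global quality score in [0, 1]
global_score = float(np.mean(1.0 / (1.0 + (local / 3.8) ** 2)))
print(local, global_score)
```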
An automated performance budget estimator: a process for use in instrumentation
NASA Astrophysics Data System (ADS)
Laporte, Philippe; Schnetler, Hermine; Rees, Phil
2016-08-01
Current-day astronomy projects continue to increase in size and complexity, regardless of the wavelength domain, while risks in terms of safety, cost and operability have to be reduced to ensure an affordable total cost of ownership. All of these drivers have to be considered carefully during the development of an astronomy project, at the same time as there is a strong drive to shorten the development life-cycle. From the systems engineering point of view, this evolution is a significant challenge. Big instruments imply management of interfaces within large consortia and tight design-phase schedules, which necessitate efficient and rapid interactions between all the stakeholders, firstly to ensure that the system is defined correctly and secondly that the designs will meet all the requirements. It is essential that team members respond quickly so that the time available for the design team is maximised. In this context, performance prediction tools can be very helpful during the concept phase of a project in selecting the best design solution. In the first section of this paper we present the development of such a prediction tool, which can be used by the systems engineer to determine the overall performance of the system and to evaluate the impact on the science of the proposed design. This tool can also be used in "what-if" design analyses to assess the impact on the overall performance of the system based on the numbers calculated by the automated system performance prediction tool. Having such a tool available from the beginning of a project allows, firstly, a faster turn-around between the design engineers and the systems engineer and, secondly, between the systems engineer and the instrument scientist. We then describe the process for constructing a performance estimator tool and three projects in which such a tool has been utilised, to illustrate how such tools have been used in astronomy projects. The three use cases are EAGLE, one of the European Extremely Large Telescope (E-ELT) Multi-Object Spectrograph (MOS) instruments that was studied from 2007 to 2009; the Multi-Object Optical and Near-Infrared Spectrograph (MOONS) for the European Southern Observatory's Very Large Telescope (VLT), currently under development; and SST-GATE.
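One common way such performance budgets are rolled up, assuming independent zero-mean contributors, is a root-sum-square combination; the sketch below also reports each term's fractional share, which is what lets a systems engineer answer "what-if" questions quickly. The contributor names and magnitudes are illustrative.

```python
import numpy as np

def budget_rollup(contributors):
    """Root-sum-square roll-up of independent 1-sigma error contributors.

    contributors: dict of name -> contribution (e.g. nm RMS wavefront error).
    Correlated terms must be combined linearly before entering the RSS.
    """
    total = float(np.sqrt(sum(v ** 2 for v in contributors.values())))
    shares = {k: v ** 2 / total ** 2 for k, v in contributors.items()}
    return total, shares

total, shares = budget_rollup({"design residual": 45.0, "alignment": 30.0,
                               "thermal drift": 20.0, "manufacturing": 35.0})
print(f"{total:.1f} nm RMS", shares)
```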
Fan Noise Prediction with Applications to Aircraft System Noise Assessment
NASA Technical Reports Server (NTRS)
Nark, Douglas M.; Envia, Edmane; Burley, Casey L.
2009-01-01
This paper describes an assessment of current fan noise prediction tools by comparing measured and predicted sideline acoustic levels from a benchmark fan noise wind tunnel test. Specifically, an empirical method and newly developed coupled computational approach are utilized to predict aft fan noise for a benchmark test configuration. Comparisons with sideline noise measurements are performed to assess the relative merits of the two approaches. The study identifies issues entailed in coupling the source and propagation codes, as well as provides insight into the capabilities of the tools in predicting the fan noise source and subsequent propagation and radiation. In contrast to the empirical method, the new coupled computational approach provides the ability to investigate acoustic near-field effects. The potential benefits/costs of these new methods are also compared with the existing capabilities in a current aircraft noise system prediction tool. The knowledge gained in this work provides a basis for improved fan source specification in overall aircraft system noise studies.
NATIONAL URBAN DATABASE AND ACCESS PORTAL TOOL
Current mesoscale weather prediction and microscale dispersion models are limited in their ability to perform accurate assessments in urban areas. A project called the National Urban Database with Access Portal Tool (NUDAPT) is beginning to provide urban data and improve the para...
High-fidelity modeling and impact footprint prediction for vehicle breakup analysis
NASA Astrophysics Data System (ADS)
Ling, Lisa
For decades, vehicle breakup analysis has been performed for space missions that used nuclear heater or power units, in order to assess aerospace nuclear safety for potential launch failures leading to inadvertent atmospheric reentry. Such pre-launch risk analysis is imperative for assessing possible environmental impacts, obtaining launch approval, and planning launch contingencies. To perform a vehicle breakup analysis accurately, the analysis tool should include a trajectory propagation algorithm coupled with thermal and structural analyses. Since such a software tool was not available commercially or in the public domain, a basic analysis tool was developed by Dr. Angus McRonald prior to this study. This legacy software consisted of low-fidelity modeling and had the capability to predict vehicle breakup, but did not predict the surface impact point of the nuclear component. Thus the main thrust of this study was to develop and verify additional dynamics modeling and capabilities for the analysis tool, with the objectives of (1) predicting the impact point and footprint, (2) increasing the fidelity of the vehicle breakup prediction, and (3) reducing the effort and time required to complete an analysis. The new functions developed for predicting the impact point and footprint included 3-degree-of-freedom trajectory propagation, the generation of non-arbitrary entry conditions, sensitivity analysis, and the calculation of the impact footprint. The functions to increase the fidelity of the vehicle breakup prediction included a panel code to calculate the hypersonic aerodynamic coefficients for an arbitrarily shaped body and the modeling of local winds. The function to reduce the effort and time required to complete an analysis included the calculation of node failure criteria. The derivation and development of these new functions are presented in this dissertation, and examples are given to demonstrate the new capabilities and the improvements made, with comparisons between the results obtained from the upgraded analysis tool and the legacy software wherever applicable.
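The 3-degree-of-freedom propagation at the heart of the upgrade can be sketched as planar point-mass entry dynamics, shown below for a ballistic (non-lifting) body with an exponential atmosphere; the ballistic coefficient, entry state and atmosphere constants are illustrative, and the real tool couples this propagation to heating and structural-failure models.

```python
import numpy as np
from scipy.integrate import solve_ivp

RE, G0 = 6.371e6, 9.81           # Earth radius (m), sea-level gravity (m/s^2)
RHO0, HSCALE = 1.225, 7200.0     # exponential atmosphere (kg/m^3, m)

def entry_dynamics(t, s, beta):
    """s = [speed v, flight-path angle gamma, altitude h]; beta = m/(Cd*A)."""
    v, gamma, h = s
    g = G0 * (RE / (RE + h)) ** 2
    rho = RHO0 * np.exp(-h / HSCALE)
    drag = 0.5 * rho * v ** 2 / beta            # drag deceleration, m/s^2
    dv = -drag - g * np.sin(gamma)
    dgamma = (v / (RE + h) - g / v) * np.cos(gamma)
    dh = v * np.sin(gamma)
    return [dv, dgamma, dh]

impact = lambda t, s, beta: s[2]                # stop when altitude reaches zero
impact.terminal, impact.direction = True, -1

sol = solve_ivp(entry_dynamics, (0.0, 2000.0),
                [7500.0, np.radians(-2.0), 120e3],   # entry speed, angle, altitude
                args=(300.0,), events=impact, max_step=1.0)
print(f"impact after {sol.t[-1]:.0f} s at v = {sol.y[0, -1]:.0f} m/s")
```

Dispersing the entry state and vehicle parameters over their uncertainties and collecting the resulting impact points is what turns a single propagation like this into an impact footprint.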
Prediction: The Modern-Day Sport-Science and Sports-Medicine "Quest for the Holy Grail".
McCall, Alan; Fanchini, Maurizio; Coutts, Aaron J
2017-05-01
In high-performance sport, science and medicine practitioners employ a variety of physical and psychological tests, training and match monitoring, and injury-screening tools for a variety of reasons, mainly to predict performance, identify talented individuals, and flag when an injury will occur. The ability to "predict" outcomes such as performance, talent, or injury is arguably sport science and medicine's modern-day equivalent of the "Quest for the Holy Grail." The purpose of this invited commentary is to highlight how studies investigating association are commonly misinterpreted as analyzing prediction, and to provide practitioners with simple recommendations to quickly distinguish between methods pertaining to association and those of prediction.
Kashani-Amin, Elaheh; Tabatabaei-Malazy, Ozra; Sakhteman, Amirhossein; Larijani, Bagher; Ebrahim-Habibi, Azadeh
2018-02-27
Prediction of a protein's secondary structure is one of the major steps in the generation of homology models. These models provide structural information that is used to design suitable ligands for potential medicinal targets. However, selecting a proper tool among multiple secondary structure prediction (SSP) options is challenging. The current study is an insight into currently favored methods and tools, within various contexts. A systematic review was performed for comprehensive access to recent (2013-2016) studies that used or recommended protein SSP tools. Three databases, Web of Science, PubMed and Scopus, were systematically searched, and 99 of 209 studies were finally found eligible for data extraction. Four categories of applications for the 59 retrieved SSP tools were: (I) prediction of the structural features of a given sequence, (II) evaluation of a method, (III) providing input for a new SSP method and (IV) integration of an SSP tool as a component of a program. PSIPRED was found to be the most popular tool in all four categories. JPred and tools utilizing the PHD (Profile network from HeiDelberg) method occupied the second and third places of popularity in categories I and II. JPred was found only in the first two categories, while PHD was present in three fields. This study provides a comprehensive insight into the recent usage of SSP tools, which could be helpful in choosing an appropriate tool. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
A predictive pilot model for STOL aircraft landing
NASA Technical Reports Server (NTRS)
Kleinman, D. L.; Killingsworth, W. R.
1974-01-01
An optimal control approach has been used to model pilot performance during STOL flare and landing. The model is used to predict pilot landing performance for three STOL configurations, each having a different level of automatic control augmentation. Model predictions are compared with flight simulator data. It is concluded that the model can be an effective design tool for analytically studying the effects of display modifications, different stability augmentation systems, and proposed changes in the landing area geometry.
A New Scheme to Characterize and Identify Protein Ubiquitination Sites.
Nguyen, Van-Nui; Huang, Kai-Yao; Huang, Chien-Hsun; Lai, K Robert; Lee, Tzong-Yi
2017-01-01
Protein ubiquitination, involving the conjugation of ubiquitin on lysine residues, serves as an important modulator of many cellular functions in eukaryotes. Recent advancements in proteomic technology have stimulated increasing interest in identifying ubiquitination sites. However, most computational tools for predicting ubiquitination sites are focused on small-scale data. With an increasing number of experimentally verified ubiquitination sites, we were motivated to design a predictive model for identifying lysine ubiquitination sites in large-scale proteome datasets. This work assessed not only single features, such as amino acid composition (AAC), amino acid pair composition (AAPC) and evolutionary information, but also the effectiveness of incorporating two or more features into a hybrid approach to model construction. The support vector machine (SVM) was applied to generate the prediction models for ubiquitination site identification. Evaluation by five-fold cross-validation showed that the SVM models learned from the combination of hybrid features delivered a better prediction performance. Additionally, a motif discovery tool, MDDLogo, was adopted to characterize the potential substrate motifs of ubiquitination sites. The SVM models integrating the MDDLogo-identified substrate motifs could yield an average accuracy of 68.70 percent. Furthermore, the independent testing result showed that the MDDLogo-clustered SVM models could provide a promising accuracy (78.50 percent) and perform better than other prediction tools. Two case studies demonstrated the effective prediction of ubiquitination sites with corresponding substrate motifs.
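To make the featurization concrete, here is a minimal, hedged sketch of the amino-acid-composition (AAC) encoding and the SVM cross-validation workflow the study describes, with scikit-learn assumed; the sequence windows and labels are toy values, not data from the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

AA = "ACDEFGHIKLMNPQRSTVWY"

def aac(window):
    """Amino acid composition of a sequence window centered on a lysine."""
    return np.array([window.count(a) / len(window) for a in AA])

# Toy windows around lysine (K) sites; labels 1 = ubiquitinated (illustrative)
windows = ["ALKEKSTVL", "GGKDEPLKA", "MVKQALDEE", "PPKSSGGAL"]
labels = [1, 1, 0, 0]
X = np.array([aac(w) for w in windows])

# The study used five-fold cross-validation; with thousands of real sites
# cv=5 is meaningful, whereas this toy set only supports cv=2.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
print(cross_val_score(clf, X, labels, cv=2).mean())
```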
ERIC Educational Resources Information Center
Matsanka, Christopher
2017-01-01
The purpose of this non-experimental quantitative study was to investigate the relationship between Pennsylvania's Classroom Diagnostic Tools (CDT) interim assessments and the state-mandated Pennsylvania System of School Assessment (PSSA) and to create linear regression equations that could be used as models to predict student performance on the…
Sharma, Ashok K; Srivastava, Gopal N; Roy, Ankita; Sharma, Vineet K
2017-01-01
The experimental methods for the prediction of molecular toxicity are tedious and time-consuming tasks. Thus, computational approaches could be used to develop alternative methods for toxicity prediction. We have developed a tool for the prediction of molecular toxicity along with the aqueous solubility and permeability of any molecule/metabolite. Using a comprehensive and curated set of toxin molecules as a training set, different chemical and structural features such as descriptors and fingerprints were exploited for feature selection, optimization and development of machine learning based classification and regression models. The compositional differences in the distribution of atoms were apparent between toxins and non-toxins, and hence, the molecular features were used for the classification and regression. On 10-fold cross-validation, the descriptor-based, fingerprint-based and hybrid-based classification models showed similar accuracy (93%) and Matthews correlation coefficient (0.84). The performances of all three models were comparable (Matthews correlation coefficient = 0.84-0.87) on the blind dataset. In addition, the regression-based models using descriptors as input features were also compared and evaluated on the blind dataset. The random forest based regression model for the prediction of solubility performed better (R² = 0.84) than the multi-linear regression (MLR) and partial least squares regression (PLSR) models, whereas the partial least squares based regression model for the prediction of permeability (Caco-2) performed better (R² = 0.68) in comparison to the random forest and MLR based regression models. The performance of the final classification and regression models was evaluated using two validation datasets including known toxins and commonly used constituents of health products, which attests to their accuracy. The ToxiM web server would be a highly useful and reliable tool for the prediction of toxicity, solubility, and permeability of small molecules.
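The model comparison in the abstract, a nonlinear ensemble against linear latent-variable regression, can be reproduced in miniature. The sketch below, with synthetic descriptor data and scikit-learn assumed, shows the typical pattern: random forest overtakes PLS when the descriptor-to-property relationship is nonlinear. Nothing here is the paper's data or code.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 30))          # 30 synthetic molecular descriptors
y = X[:, 0] - 2.0 * X[:, 1] ** 2 + 0.1 * rng.normal(size=500)  # nonlinear target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)

# The paper reports RF ahead of PLSR for solubility; the same ordering appears
# here because PLS, a linear method, cannot capture the squared term.
print("RF  R^2:", round(r2_score(y_te, rf.predict(X_te)), 2))
print("PLS R^2:", round(r2_score(y_te, pls.predict(X_te).ravel()), 2))
```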
RDNAnalyzer: A tool for DNA secondary structure prediction and sequence analysis.
Afzal, Muhammad; Shahid, Ahmad Ali; Shehzadi, Abida; Nadeem, Shahid; Husnain, Tayyab
2012-01-01
RDNAnalyzer is an innovative computer-based tool designed for DNA secondary structure prediction and sequence analysis. It can randomly generate a DNA sequence, or the user can upload sequences of interest in RAW format. It uses and extends the Nussinov dynamic programming algorithm and has various applications for sequence analysis. It predicts the DNA secondary structure and base pairings. It also provides tools for sequence analyses routinely performed by biological scientists, such as DNA replication, reverse complement generation, transcription, translation, sequence-specific information (total number of nucleotide bases and ATGC base contents along with their respective percentages), and a sequence cleaner. RDNAnalyzer is a unique tool developed in Microsoft Visual Studio 2008 using Microsoft Visual C# and Windows Presentation Foundation and provides a user-friendly environment for sequence analysis. It is freely available at http://www.cemb.edu.pk/sw.html. RDNAnalyzer - Random DNA Analyser; GUI - Graphical user interface; XAML - Extensible Application Markup Language.
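The Nussinov recurrence at the tool's core maximizes the number of complementary base pairs over all nested structures. A compact, hedged Python sketch of the basic algorithm (Watson-Crick pairs only, with a minimum hairpin loop; RDNAnalyzer's actual extensions are not reproduced here):

```python
def nussinov(seq, min_loop=3):
    """Maximum number of nested base pairs in a DNA sequence (Nussinov DP)."""
    pairs = {("A", "T"), ("T", "A"), ("G", "C"), ("C", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):          # increasing subsequence length
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                  # case 1: j unpaired
            for k in range(i, j - min_loop):     # case 2: j paired with k
                if (seq[k], seq[j]) in pairs:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]

print(nussinov("GGGAAATCCC"))   # -> 3 (a G-C stem of three pairs with a hairpin loop)
```

A traceback over the same table recovers the base pairings themselves, which is what a structure-prediction tool reports to the user.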
Design Curve Generation for 3D SiC Fiber Architecture
NASA Technical Reports Server (NTRS)
Lang, Jerry; Dicarlo, James A.
2014-01-01
The design tool provides design curves that allow a simple and quick way to examine multiple factors that can influence the processing and key properties of the preforms and their final SiC-reinforced ceramic composites without over-committing financial capital to fabricating materials. Tool predictions for process and fiber fraction properties have been validated for an HNS 3D preform. The virtualization aspect of the tool will be used to quickly generate solid models with actual fiber paths for finite element evaluation, to predict mechanical and thermal properties of proposed composites as well as mechanical displacement behavior due to creep and stress relaxation, and to study load-sharing characteristics between constituents for better performance. Tool predictions for the fiber-controlled properties of the SiC/SiC CMCs fabricated from the HNS preforms will be validated and upgraded using measurements on these CMCs.
NASA Technical Reports Server (NTRS)
Kwak, Dochan
2005-01-01
Over the past 30 years, numerical methods and simulation tools for fluid dynamic problems have advanced as a new discipline, namely, computational fluid dynamics (CFD). Although a wide spectrum of flow regimes is encountered in many areas of science and engineering, simulation of compressible flow has been the major driver for developing computational algorithms and tools. This is probably due to a large demand for predicting the aerodynamic performance characteristics of flight vehicles, such as commercial, military, and space vehicles. As flow analysis is required to be more accurate and computationally efficient for both commercial and mission-oriented applications (such as those encountered in meteorology, aerospace vehicle development, general fluid engineering and biofluid analysis), CFD tools for engineering become increasingly important for predicting safety, performance and cost. This paper presents the author's perspective on the maturity of CFD, especially from an aerospace engineering point of view.
Campanini, Isabella; Mastrangelo, Stefano; Bargellini, Annalisa; Bassoli, Agnese; Bosi, Gabriele; Lombardi, Francesco; Tolomelli, Stefano; Lusuardi, Mirco; Merlo, Andrea
2018-01-11
Falls are a common adverse event in both elderly inpatients and patients admitted to rehabilitation units. The Hendrich Fall Risk Model II (HIIFRM) has already been tested in all hospital wards with high fall rates, with the exception of the rehabilitation setting. This study's aim is to address the feasibility and predictive performance of the HIIFRM in a hospital rehabilitation department. A 6-month prospective study was conducted in an Italian rehabilitation department with patients from orthopaedic, pulmonary, and neurological rehabilitation wards. All admitted patients were enrolled and assessed within 24 h of admission by means of the HIIFRM. The occurrence of falls was checked and recorded daily. HIIFRM feasibility was assessed as the percentage of successful administrations at admission. HIIFRM predictive performance was determined in terms of area under the Receiver Operating Characteristic (ROC) curve (AUC), best cutoff, sensitivity, specificity, and positive and negative predictive values, along with their asymptotic 95% confidence intervals (95% CI). One hundred ninety-one patients were admitted. The HIIFRM was feasible in 147 cases (77%), 11 of which suffered a fall (7.5%). Failures in administration were mainly due to bedridden patients (e.g. minimally conscious state, vegetative state). The AUC was 0.779 (0.685-0.873). The original HIIFRM cutoff of 5 led to a sensitivity of 100% with a mere specificity of 49% (40-57%), thus suggesting the use of higher cutoffs. Moreover, the median score for non-fallers at rehabilitation units was higher than that reported in the literature for geriatric non-fallers. The best trade-off between sensitivity and specificity was obtained with a cutoff of 8. This led to sensitivity = 73% (46-99%), specificity = 72% (65-80%), positive predictive value = 17% and negative predictive value = 97%. These results support the use of the HIIFRM as a predictive tool. The HIIFRM showed satisfactory feasibility and predictive performance in rehabilitation wards. Based on both the available literature and these results, the prediction of falls among all hospital wards with high risk of falling could be achieved by means of a unique tool and two different cutoffs: a standard cutoff of 5 in geriatric wards and an adjusted higher cutoff in rehabilitation units, with predictive performance similar to that of the best-performing pathology-specific tools for fall-risk assessment.
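Choosing such an adjusted cutoff is a standard ROC exercise. A hedged sketch (synthetic scores, scikit-learn assumed) of picking the sensitivity/specificity trade-off point via Youden's J, analogous to how a cutoff like 8 could be identified from the study's 147 assessments:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(7)
# Synthetic admission scores: 136 non-fallers and 11 fallers (counts as in the
# study); fallers tend to score higher. The distributions are invented.
scores = np.concatenate([rng.normal(6, 2, 136), rng.normal(10, 2, 11)])
fell = np.concatenate([np.zeros(136), np.ones(11)])

fpr, tpr, thr = roc_curve(fell, scores)
print("AUC:", round(roc_auc_score(fell, scores), 3))

# Youden's J = sensitivity + specificity - 1 peaks at the cutoff with the
# best trade-off, analogous to the study's adjusted cutoff of 8.
best = np.argmax(tpr - fpr)
print("cutoff:", round(thr[best], 1),
      "sensitivity:", round(tpr[best], 2),
      "specificity:", round(1 - fpr[best], 2))
```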
Predicting Airport Screening Officers' Visual Search Competency With a Rapid Assessment.
Mitroff, Stephen R; Ericson, Justin M; Sharpe, Benjamin
2018-03-01
Objective The study's objective was to assess a new personnel selection and assessment tool for aviation security screeners. A mobile app was modified to create a tool, and the question was whether it could predict professional screeners' on-job performance. Background A variety of professions (airport security, radiology, the military, etc.) rely on visual search performance, that is, being able to detect targets. Given the importance of such professions, it is necessary to maximize performance, and one means to do so is to select individuals who excel at visual search. A critical question is whether it is possible to predict search competency within a professional search environment. Method Professional searchers from the US Transportation Security Administration (TSA) completed a rapid assessment on a tablet-based X-ray simulator (XRAY Screener, derived from the mobile technology app Airport Scanner; Kedlin Company). The assessment contained 72 trials that were simulated X-ray images of bags. Participants searched for prohibited items and tapped on them with their finger. Results Performance on the assessment significantly related to on-job performance measures for the TSA officers such that those who were better XRAY Screener performers were both more accurate and faster at the actual airport checkpoint. Conclusion XRAY Screener successfully predicted on-job performance for professional aviation security officers. While questions remain about the underlying cognitive mechanisms, this quick assessment was found to significantly predict on-job success for a task that relies on visual search performance. Application It may be possible to quickly assess an individual's visual search competency, which could help organizations select new hires and assess their current workforce.
Analysis and Design of Rotors at Ultra-Low Reynolds Numbers
NASA Technical Reports Server (NTRS)
Kunz, Peter J.; Strawn, Roger C.
2003-01-01
Design tools have been developed for ultra-low Reynolds number rotors, combining enhanced actuator-ring/blade-element theory with airfoil section data based on two-dimensional Navier-Stokes calculations. This performance prediction method is coupled with an optimizer for both design and analysis applications. Performance predictions from these tools have been compared with three-dimensional Navier-Stokes analyses and experimental data for a 2.5 cm diameter rotor with chord Reynolds numbers below 10,000. Comparisons among the analyses and experimental data show reasonable agreement in both the global thrust and the power required, but the spanwise distributions of these quantities exhibit significant deviations. The study also reveals that three-dimensional and rotational effects significantly change local airfoil section performance. The magnitude of this issue, unique to this operating regime, may limit the applicability of blade-element type methods for detailed rotor design at ultra-low Reynolds numbers, but these methods are still useful for evaluating concept feasibility and rapidly generating initial designs for further analysis and optimization using more advanced tools.
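For readers unfamiliar with the blade-element idea, the hedged sketch below integrates section lift and drag along the span to estimate rotor thrust and power. A prescribed uniform inflow stands in for the enhanced actuator-ring inflow model, and toy low-Reynolds polars stand in for the paper's Navier-Stokes section data; every number is illustrative.

```python
import math

def rotor_thrust_power(radius, chord, n_blades, omega, theta, v_i, cl, cd,
                       rho=1.225, n=50):
    """Blade-element estimate of rotor thrust (N) and power (W).

    theta : blade pitch (rad), assumed constant along the span
    v_i   : prescribed uniform induced velocity (m/s) -- a strong simplification
    cl,cd : callables alpha (rad) -> section coefficients (e.g. from CFD tables)
    """
    thrust = power = 0.0
    dr = radius / n
    for k in range(n):
        r = (k + 0.5) * dr
        u_t, u_p = omega * r, v_i               # tangential, axial velocities
        w = math.hypot(u_t, u_p)
        phi = math.atan2(u_p, u_t)              # inflow angle
        alpha = theta - phi
        dL = 0.5 * rho * w * w * chord * cl(alpha) * dr
        dD = 0.5 * rho * w * w * chord * cd(alpha) * dr
        thrust += n_blades * (dL * math.cos(phi) - dD * math.sin(phi))
        power += n_blades * (dL * math.sin(phi) + dD * math.cos(phi)) * u_t
    return thrust, power

# Toy low-Reynolds section polars (illustrative, not the paper's CFD data)
cl = lambda a: 5.0 * a                     # reduced lift-curve slope
cd = lambda a: 0.05 + 1.5 * a * a          # high profile drag at ultra-low Re
print(rotor_thrust_power(0.0125, 0.003, 2, 5000.0, math.radians(12), 1.0, cl, cd))
```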
Chimpanzees create and modify probe tools functionally: A study with zoo-housed chimpanzees.
Hopper, Lydia M; Tennie, Claudio; Ross, Stephen R; Lonsdorf, Elizabeth V
2015-02-01
Chimpanzees (Pan troglodytes) use tools to probe for out-of-reach food, both in the wild and in captivity. Beyond gathering appropriately-sized materials to create tools, chimpanzees also perform secondary modifications in order to create an optimized tool. In this study, we recorded the behavior of a group of zoo-housed chimpanzees when presented with opportunities to use tools to probe for liquid foods in an artificial termite mound within their enclosure. Previous research with this group of chimpanzees has shown that they are proficient at gathering materials from within their environment in order to create tools to probe for the liquid food within the artificial mound. Extending beyond this basic question, we first asked whether they only made and modified probe tools when it was appropriate to do so (i.e. when the mound was baited with food). Second, by collecting continuous data on their behavior, we also asked whether the chimpanzees first (intentionally) modified their tools prior to probing for food or whether such modifications occurred after tool use, possibly as a by-product of chewing and eating the food from the tools. Following our predictions, we found that tool modification predicted tool use; the chimpanzees began using their tools within a short delay of creating and modifying them, and the chimpanzees performed more tool modifying behaviors when food was available than when they could not gain food through the use of probe tools. We also discuss our results in terms of the chimpanzees' acquisition of the skills, and their flexibility of tool use and learning. © 2014 Wiley Periodicals, Inc.
Defining sarcopenia in terms of incident adverse outcomes.
Woo, Jean; Leung, Jason; Morley, J E
2015-03-01
The objectives of this study were to compare the performance of different diagnoses of sarcopenia using the European Working Group on Sarcopenia in Older People, International Working Group on Sarcopenia, and US Foundation for the National Institutes of Health (FNIH) criteria, and the screening tool SARC-F, against the Asian Working Group for Sarcopenia consensus panel definitions, in predicting physical limitation, slow walking speed, repeated chair stand performance, days of hospital stay, and mortality at follow-up. Longitudinal study. Community survey in Hong Kong. Participants were 4000 men and women 65 years and older living in the community. Information was collected by questionnaire regarding activities of daily living, physical functioning limitations, and the constituent questions of SARC-F; body mass index (BMI), grip strength (GS), walking speed, and appendicular muscle mass (ASM) were also measured. FNIH and consensus panel definitions and the screening tool SARC-F all have similar AUC values in predicting incident physical limitation and physical performance measures at 4 years, walking speed at 7 years, days of hospital stay at 7 years, and mortality at 10 years. None of the definitions predicted an increase in physical limitation at 4 years or mortality at 10 years in women, and none predicted all the adverse outcomes. The highest AUC values were observed for walking speed at 4 and 7 years. When applied to a Chinese elderly population, criteria used for the diagnosis of sarcopenia derived from European, Asian, and international consensus panels, from US cutoff values defined from incident physical limitation, and the SARC-F screening tool all have similar performance in predicting incident physical limitation and mortality. Copyright © 2015 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.
Analysis of Aurora's Performance Simulation Engine for Three Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freeman, Janine; Simon, Joseph
2015-07-07
Aurora Solar Inc. is building a cloud-based optimization platform to automate the design, engineering, and permit generation process of solar photovoltaic (PV) installations. They requested that the National Renewable Energy Laboratory (NREL) validate the performance of the PV system performance simulation engine of Aurora Solar's solar design platform, Aurora. In previous work, NREL performed a validation of multiple other PV modeling tools [1], so this study builds upon that work by examining all of the same fixed-tilt systems with available module datasheets that NREL selected and used in the aforementioned study. Aurora Solar set up these three operating PV systems in their modeling platform using NREL-provided system specifications and concurrent weather data. NREL then verified the setup of these systems, ran the simulations, and compared the Aurora-predicted performance data to measured performance data for those three systems, as well as to performance data predicted by other PV modeling tools.
Mallika, V; Sivakumar, K C; Jaichand, S; Soniya, E V
2010-07-13
Type III polyketide synthases (PKS) are a family of proteins considered to have significant roles in the biosynthesis of various polyketides in plants, fungi and bacteria. As these proteins show positive effects on human health, research on them continues to grow. A tool to estimate the probability of a sequence being a type III polyketide synthase will minimize time and manpower. In this approach, we have designed and implemented PKSIIIpred, a high-performance prediction server for type III PKS in which the classifier is a support vector machine (SVM). Based on the limited training dataset, the tool efficiently predicts the type III PKS superfamily of proteins with high sensitivity and specificity. PKSIIIpred is available at http://type3pks.in/prediction/. We expect that this tool may serve as a useful resource for type III PKS researchers. Work is currently in progress to further improve prediction accuracy by including more sequence features in the training dataset.
Das, Koel; Giesbrecht, Barry; Eckstein, Miguel P
2010-07-15
Within the past decade, computational approaches adopted from the field of machine learning have provided neuroscientists with powerful new tools for analyzing neural data. For instance, previous studies have applied pattern classification algorithms to electroencephalography data to predict the category of presented visual stimuli, human observer decision choices, and task difficulty. Here, we quantitatively compare the ability of pattern classifiers and three ERP metrics (peak amplitude, mean amplitude, and onset latency of the face-selective N170) to predict variations across individuals' behavioral performance in a difficult perceptual task identifying images of faces and cars embedded in noise. We investigate three different pattern classifiers (Classwise Principal Component Analysis, CPCA; Linear Discriminant Analysis, LDA; and Support Vector Machine, SVM), five training methods differing in the selection of training data sets, and three analysis procedures for the ERP measures. We show that all three pattern classifier algorithms surpass traditional ERP measurements in their ability to predict individual differences in performance. Although the differences across pattern classifiers were not large, the CPCA method with training data sets restricted to EEG activity for trials in which observers expressed high confidence about their decisions performed best at predicting the perceptual performance of observers. We also show that the neural activity predicting performance across individuals was distributed through time, starting at 120 ms and, unlike the face-selective ERP response, sustained for more than 400 ms after stimulus presentation, indicating that both early and late components contain information correlated with observers' behavioral performance. Together, our results further demonstrate the potential of pattern classifiers compared to more traditional ERP techniques as an analysis tool for modeling the spatiotemporal dynamics of the human brain and relating neural activity to behavior. Copyright 2010 Elsevier Inc. All rights reserved.
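The paper's headline contrast, multivariate classifiers beating single ERP measures, is easy to illustrate. In the hedged toy sketch below (synthetic multichannel epochs, scikit-learn assumed, with LDA standing in for the paper's CPCA/LDA/SVM family), pooling all electrodes outperforms a single-electrode amplitude:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials, n_chan = 200, 32
pattern = np.linspace(0.2, 1.0, n_chan)      # spatial pattern of the response
y = rng.integers(0, 2, n_trials)             # face (1) vs car (0) trials
X = 0.5 * np.outer(y - 0.5, pattern) + rng.normal(size=(n_trials, n_chan))

# Single-electrode "peak amplitude" analogue vs the full multivariate pattern
lda = LinearDiscriminantAnalysis()
single = cross_val_score(lda, X[:, [0]], y, cv=5).mean()
multi = cross_val_score(lda, X, y, cv=5).mean()
print(f"one electrode: {single:.2f}   all electrodes: {multi:.2f}")
```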
NASA Technical Reports Server (NTRS)
Wang, John T.; Bomarito, Geoffrey F.
2016-01-01
This study implements a plasticity tool to predict the nonlinear shear behavior of unidirectional composite laminates under multiaxial loadings, with an intent to further develop the tool for use in composite progressive damage analysis. The steps for developing the plasticity tool include establishing a general quadratic yield function, deriving the incremental elasto-plastic stress-strain relations using the yield function with associated flow rule, and integrating the elasto-plastic stress-strain relations with a modified Euler method and a substepping scheme. Micromechanics analyses are performed to obtain normal and shear stress-strain curves that are used in determining the plasticity parameters of the yield function. By analyzing a micromechanics model, a virtual testing approach is used to replace costly experimental tests for obtaining stress-strain responses of composites under various loadings. The predicted elastic moduli and Poisson's ratios are in good agreement with experimental data. The substepping scheme for integrating the elasto-plastic stress-strain relations is suitable for working with displacement-based finite element codes. An illustration problem is solved to show that the plasticity tool can predict the nonlinear shear behavior for a unidirectional laminate subjected to multiaxial loadings.
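The integration step the abstract describes, advancing elasto-plastic stress-strain relations in substeps, can be sketched in one dimension. The hedged sketch below assumes linear isotropic hardening and splits each strain increment into fixed substeps, a simple stand-in for the study's error-controlled modified Euler scheme; all names and constants are illustrative.

```python
def elastoplastic_1d(strain_path, E=70e3, sigma_y=100.0, H=5e3, nsub=10):
    """Substepped integration of a 1-D linear-hardening elasto-plastic law.

    Stresses in MPa. Each strain increment is split into nsub sub-increments,
    with the yield check and plastic correction applied at every substep.
    """
    stress, alpha, eps_prev = 0.0, 0.0, 0.0   # stress, hardening variable
    history = [stress]
    for eps in strain_path:
        d_eps = (eps - eps_prev) / nsub
        for _ in range(nsub):
            trial = stress + E * d_eps                  # elastic predictor
            f = abs(trial) - (sigma_y + H * alpha)      # yield function
            if f <= 0.0:
                stress = trial                          # purely elastic substep
            else:
                d_lambda = f / (E + H)                  # plastic multiplier
                stress = trial - (E * d_lambda if trial > 0 else -E * d_lambda)
                alpha += d_lambda                       # isotropic hardening
        eps_prev = eps
        history.append(stress)
    return history

# Load to 0.5% strain, then unload: yield, hardening, then elastic unloading
path = [0.001 * k for k in range(1, 6)] + [0.005 - 0.001 * k for k in range(1, 6)]
print([round(s, 1) for s in elastoplastic_1d(path)])
```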
An Experimental and Theoretical Study on Cavitating Propellers.
1982-10-01
Keywords: cascade flow; theoretical supercavitating flow; performance prediction method; partially cavitating flow; supercavitating… The aim of the present work was to develop an analytical tool for predicting the off-design performance of supercavitating propellers over a wide range of operating conditions. Due to the complex nature of the flow phenomena, a lifting-line theory simply combined with the two-dimensional supercavitating
NASA Technical Reports Server (NTRS)
Morey, Susan; Prevot, Thomas; Mercer, Joey; Martin, Lynne; Bienert, Nancy; Cabrall, Christopher; Hunt, Sarah; Homola, Jeffrey; Kraut, Joshua
2013-01-01
A human-in-the-loop simulation was conducted to examine the effects of varying levels of trajectory prediction uncertainty on air traffic controller workload and performance, as well as how strategies and the use of decision support tools change in response. This paper focuses on the strategies employed by two controllers from separate teams who worked in parallel but independently under identical conditions (airspace, arrival traffic, tools) with the goal of ensuring schedule conformance and safe separation for a dense arrival flow in en route airspace. Despite differences in strategy and methods, both controllers achieved high levels of schedule conformance and safe separation. Overall, results show that trajectory uncertainties introduced by wind and aircraft performance prediction errors do not affect the controllers' ability to manage traffic. Controller strategies were fairly robust to changes in error, though strategies were affected by the amount of delay to absorb (scheduled time of arrival minus estimated time of arrival). Based on these results and observations, this paper proposes dynamically customizing the display of information, including delay time, based on observed error, to better accommodate different strategies and objectives.
Space Mission Human Reliability Analysis (HRA) Project
NASA Technical Reports Server (NTRS)
Boyer, Roger
2014-01-01
The purpose of the Space Mission Human Reliability Analysis (HRA) Project is to extend current ground-based HRA risk prediction techniques to a long-duration, space-based tool. Ground-based HRA methodology has been shown to be a reasonable tool for short-duration space missions, such as Space Shuttle and lunar fly-bys. However, longer-duration deep-space missions, such as asteroid and Mars missions, will require the crew to be in space for 400- to 900-day missions with periods of extended autonomy and self-sufficiency. Current indications show that higher risk due to fatigue, physiological effects of extended low-gravity environments, and other factors may impact HRA predictions. For this project, Safety & Mission Assurance (S&MA) will work with Human Health & Performance (HH&P) to establish what is currently used to assess human reliability for human space programs, identify human performance factors that may be sensitive to long-duration space flight, collect available historical data, and update current tools to account for performance shaping factors believed to be important to such missions. This effort will also contribute data to the Human Performance Data Repository and influence the Space Human Factors Engineering research risks and gaps (part of the HRP Program). An accurate risk predictor mitigates Loss of Crew (LOC) and Loss of Mission (LOM). The end result will be an updated HRA model that can effectively predict risk on long-duration missions.
Kazan, Roy; Viezel-Mathieu, Alex; Cyr, Shantale; Hemmerling, Thomas M; Lin, Samuel J; Gilardino, Mirko S
2018-04-09
To identify new tools capable of predicting surgical performance of novices on an augmentation mammoplasty simulator. The pace of technical skills acquisition varies between residents and may necessitate more time than that allotted by residency training before reaching competence. Identifying applicants with superior innate technical abilities might shorten learning curves and the time to reach competence. The objective of this study is to identify new tools that could predict surgical performance of novices on a mammoplasty simulator. We recruited 14 medical students and recorded their performance in 2 skill games, Mikado and Perplexus Epic, and in 2 video games, Star Wars Racer (Sony PlayStation 3) and Super Monkey Ball 2 (Nintendo Wii). Then, each participant performed an augmentation mammoplasty procedure on a Mammoplasty Part-task Trainer, which allows the simulation of the essential steps of the procedure. The average age of participants was 25.4 years. Correlation studies showed significant associations between Perplexus Epic, Star Wars Racer, and Super Monkey Ball scores and the modified OSATS score, with r_s = 0.8491 (p < 0.001), r_s = -0.6941 (p = 0.005), and r_s = 0.7309 (p < 0.003), but not with the Mikado score, r_s = -0.0255 (p = 0.9). Linear regressions were strongest for Perplexus Epic and Super Monkey Ball scores, with coefficients of determination of 0.59 and 0.55, respectively. A combined score (Perplexus/Super-Monkey-Ball) was computed and showed a significant correlation with the modified OSATS score, with r_s = 0.8107 (p < 0.001) and R² = 0.75. This study identified a combination of skill games that correlated with better performance of novices on a surgical simulator. With refinement, such tools could serve to help screen plastic surgery applicants and identify those with higher surgical performance predictors. Copyright © 2018 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Identifying and Validating Selection Tools for Predicting Officer Performance and Retention
2017-05-01
Findings: Simple bivariate correlations indicated that the RBI Fitness Motivation scale was the strongest predictor of the Performance composite. (Remaining fragments of this record are table-of-contents entries: Scored Job Knowledge Tests (JKTs); Self-Report: Career History Survey (CHS); Bivariate Correlations.)
A novel adjuvant to the resident selection process: the hartman value profile.
Cone, Jeffrey D; Byrum, C Stephen; Payne, Wyatt G; Smith, David J
2012-01-01
The goal of resident selection is twofold: (1) select candidates who will be successful residents and eventually successful practitioners and (2) avoid selecting candidates who will be unsuccessful residents and/or eventually unsuccessful practitioners. Traditional tools used to select residents have well-known limitations. The Hartman Value Profile (HVP) is a proven adjuvant tool for predicting future performance in candidates for advanced positions in the corporate setting. No literature exists to indicate use of the HVP for resident selection. The HVP evaluates the structure and the dynamics of an individual's value system. Given its potential impact, we implemented its use beginning in 2007 as an adjuvant tool to the traditional selection process. Experience gained from incorporating the HVP into the residency selection process suggests that it may add objectivity and refinement in predicting resident performance. Further evaluation is warranted with longer follow-up times.
Extending BPM Environments of Your Choice with Performance Related Decision Support
NASA Astrophysics Data System (ADS)
Fritzsche, Mathias; Picht, Michael; Gilani, Wasif; Spence, Ivor; Brown, John; Kilpatrick, Peter
What-if simulations have been identified as one solution for business performance related decision support. Such support is especially useful when it can be generated automatically out of Business Process Management (BPM) environments from the existing business process models and the performance parameters monitored from the executed business process instances. Currently, some of the available BPM environments offer basic performance prediction capabilities. However, these functionalities are normally too limited to be generally useful for performance related decision support at the business process level. In this paper, an approach is presented which allows the non-intrusive integration of sophisticated tooling for what-if simulations, analytic performance prediction tools, process optimizations, or a combination of such solutions into already existing BPM environments. The approach abstracts from the process modelling technique, which enables automatic decision support spanning processes across numerous BPM environments. For instance, this enables end-to-end decision support for composite processes modelled with the Business Process Modelling Notation (BPMN) on top of existing Enterprise Resource Planning (ERP) processes modelled with proprietary languages.
Artificial neural network prediction of aircraft aeroelastic behavior
NASA Astrophysics Data System (ADS)
Pesonen, Urpo Juhani
An Artificial Neural Network that predicts aeroelastic behavior of aircraft is presented. The neural net was designed to predict the shape of a flexible wing in static flight conditions using results from a structural analysis and an aerodynamic analysis performed with traditional computational tools. To generate reliable training and testing data for the network, an aeroelastic analysis code using these tools as components was designed and validated. To demonstrate the advantages and reliability of Artificial Neural Networks, a network was also designed and trained to predict airfoil maximum lift at low Reynolds numbers where wind tunnel data was used for the training. Finally, a neural net was designed and trained to predict the static aeroelastic behavior of a wing without the need to iterate between the structural and aerodynamic solvers.
Antibody specific epitope prediction-emergence of a new paradigm.
Sela-Culang, Inbal; Ofran, Yanay; Peters, Bjoern
2015-04-01
The development of accurate tools for predicting B-cell epitopes is important but difficult. Traditional methods have examined which regions in an antigen are likely binding sites of an antibody. However, it is becoming increasingly clear that most antigen surface residues will be able to bind one or more of the myriad of possible antibodies. In recent years, new approaches have emerged for predicting an epitope for a specific antibody, utilizing information encoded in antibody sequence or structure. Applying such antibody-specific predictions to groups of antibodies in combination with easily obtainable experimental data improves the performance of epitope predictions. We expect that further advances of such tools will be possible with the integration of immunoglobulin repertoire sequencing data. Copyright © 2015 Elsevier B.V. All rights reserved.
AOP-informed assessment of endocrine disruption in freshwater crustaceans
To date, most research on developing more efficient and cost-effective methods to predict toxicity has focused on human biology. However, there is also a need for effective high-throughput tools to predict toxicity to other species that perform critical ecosystem functio...
Predictive Behavior of a Computational Foot/Ankle Model through Artificial Neural Networks.
Chande, Ruchi D; Hargraves, Rosalyn Hobson; Ortiz-Robinson, Norma; Wayne, Jennifer S
2017-01-01
Computational models are useful tools to study the biomechanics of human joints. Their predictive performance is heavily dependent on bony anatomy and soft tissue properties. Imaging data provides anatomical requirements while approximate tissue properties are implemented from literature data, when available. We sought to improve the predictive capability of a computational foot/ankle model by optimizing its ligament stiffness inputs using feedforward and radial basis function neural networks. While the former demonstrated better performance than the latter per mean square error, both networks provided reasonable stiffness predictions for implementation into the computational model.
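As a flavor of the approach, the hedged sketch below trains a feedforward network (scikit-learn's MLPRegressor standing in for the paper's networks) to invert a synthetic forward model, mapping observed joint-motion features back to the ligament stiffnesses that produced them. The forward model, feature definitions, and values are all invented for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 400
# Synthetic forward model: 4 ligament stiffnesses (N/mm) -> 4 motion features
stiffness = rng.uniform(10, 100, size=(n, 4))
motion = np.column_stack([
    stiffness @ np.array([0.02, -0.01, 0.005, 0.0]),
    np.tanh(stiffness @ np.array([0.0, 0.01, -0.02, 0.01])),
    stiffness @ np.array([0.01, 0.0, 0.01, -0.005]),
    np.sqrt(stiffness.sum(axis=1)),
]) + 0.01 * rng.normal(size=(n, 4))

# Train the inverse map: motion features -> ligament stiffnesses
X_tr, X_te, y_tr, y_te = train_test_split(motion, stiffness, random_state=0)
net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                                 random_state=0)).fit(X_tr, y_tr)
print("held-out R^2:", net.score(X_te, y_te))
```

In the study's setting the "forward model" is the computational foot/ankle model itself, so each training sample is comparatively expensive to generate; a learned inverse map amortizes that cost.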
Kimmel, Lara A; Holland, Anne E; Simpson, Pam M; Edwards, Elton R; Gabbe, Belinda J
2014-07-01
Early, accurate prediction of discharge destination from the acute hospital assists individual patients and the wider hospital system. The Trauma Rehabilitation and Prediction Tool (TRaPT), developed using registry data, determines the probability of inpatient rehabilitation discharge for patients with isolated lower limb fractures. The aims of this study were: (1) to prospectively validate the TRaPT, (2) to assess whether its performance could be improved by adding additional demographic data, and (3) to simplify it for use as a bedside tool. This was a cohort, measurement-focused study. Patients with isolated lower limb fractures (N=114) who were admitted to a major trauma center in Melbourne, Australia, were included. The participants' TRaPT scores were calculated from admission data. Performance of the TRaPT score alone, and in combination with frailty, weight-bearing status, and home supports, was assessed using measures of discrimination and calibration. A simplified TRaPT was developed by rounding the coefficients of variables in the original model and grouping age into 8 categories. Simplified TRaPT performance measures, including specificity, sensitivity, and positive and negative predictive values, were evaluated. Prospective validation of the TRaPT showed excellent discrimination (C-statistic=0.90 [95% confidence interval=0.82, 0.97]), a sensitivity of 80%, and a specificity of 94%. All participants able to weight bear were discharged directly home. Simplified TRaPT scores had a sensitivity of 80% and a specificity of 88%. Generalizability may be limited given the compensation system that exists in Australia, but the methods used will assist in designing a similar tool in any population. The TRaPT accurately predicted discharge destination for 80% of patients and may form a useful aid for discharge decision making, with the simplified version facilitating its use as a bedside tool. © 2014 American Physical Therapy Association.
SAVANT: Solar Array Verification and Analysis Tool Demonstrated
NASA Technical Reports Server (NTRS)
Chock, Ricaurte
2000-01-01
The photovoltaics (PV) industry is now being held to strict specifications, such as end-of-life power requirements, that force them to overengineer their products to avoid contractual penalties. Such overengineering has been the only reliable way to meet such specifications. Unfortunately, it also results in a more costly process than is probably necessary. In our conversations with the PV industry, the issue of cost has been raised again and again. Consequently, the Photovoltaics and Space Environment Effects branch at the NASA Glenn Research Center at Lewis Field has been developing a software tool to address this problem. SAVANT, Glenn's tool for solar array verification and analysis, is in the technology demonstration phase. Ongoing work has proven that more efficient and less costly PV designs should be possible by using SAVANT to predict the on-orbit life-cycle performance. The ultimate goal of the SAVANT project is to provide a user-friendly computer tool to predict PV on-orbit life-cycle performance. This should greatly simplify the tasks of scaling and designing the PV power component of any given flight or mission. By being able to predict how a particular PV article will perform, designers will be able to balance mission power requirements (both beginning-of-life and end-of-life) with survivability concerns such as power degradation due to radiation and/or contamination. Recent comparisons with actual flight data from the Photovoltaic Array Space Power Plus Diagnostics (PASP Plus) mission validate this approach.
Olondo, C; Legarda, F; Herranz, M; Idoeta, R
2017-04-01
This paper shows the procedure performed to validate the migration equation and the migration parameter values presented in a previous paper (Legarda et al., 2011) regarding the migration of 137Cs in Spanish mainland soils. In this paper, this model validation has been carried out by checking experimentally obtained activity concentration values against those predicted by the model. The experimental data come from the measured vertical activity profiles of 8 new sampling points located in northern Spain. Before testing the predicted values of the model, the uncertainty of those values was assessed with an appropriate uncertainty analysis. Once the uncertainty of the model was established, the two sets of activity concentration values, experimental versus model-predicted, were compared. Model validation was performed by analyzing the model's accuracy, both as a whole and at different depth intervals. As a result, this model has been validated as a tool to predict 137Cs behaviour in a Mediterranean environment. Copyright © 2017 Elsevier Ltd. All rights reserved.
Micro-Vibration Performance Prediction of SEPTA24 Using SMeSim (RUAG Space Mechanism Simulator Tool)
NASA Astrophysics Data System (ADS)
Omiciuolo, Manolo; Lang, Andreas; Wismer, Stefan; Barth, Stephan; Szekely, Gerhard
2013-09-01
Scientific space missions are currently challenging the performance of their payloads. Performance can be dramatically restricted by micro-vibration loads generated by any moving parts of the satellite, and thus by Solar Array Drive Assemblies too. Micro-vibration prediction of SADAs is therefore very important to support their design and optimization in the early stages of a programme. The Space Mechanism Simulator (SMeSim) tool, developed by RUAG, enhances the capability of analysing the micro-vibration emissivity of a Solar Array Drive Assembly (SADA) under a specified set of boundary conditions. The tool is developed in the Matlab/Simulink® environment through a library of blocks simulating the different components a SADA is made of. The modular architecture of the blocks, assembled by the user, and the setup of the boundary conditions allow time-domain and frequency-domain analyses of a rigid multi-body model with concentrated flexibilities and coupled electronic control of the mechanism. SMeSim is used to model the SEPTA24 Solar Array Drive Mechanism and predict its micro-vibration emissivity. SMeSim and the return of experience earned throughout its development and use can now support activities like verification by analysis of micro-vibration emissivity requirements and/or design optimization to minimize the micro-vibration emissivity of a SADA.
Osteoporosis risk prediction using machine learning and conventional methods.
Kim, Sung Kean; Yoo, Tae Keun; Oh, Ein; Kim, Deok Won
2013-01-01
A number of clinical decision tools for osteoporosis risk assessment have been developed to select postmenopausal women for the measurement of bone mineral density. We developed and validated machine learning models with the aim of more accurately identifying the risk of osteoporosis in postmenopausal women, and compared them with a conventional clinical decision tool, the osteoporosis self-assessment tool (OST). We collected medical records from Korean postmenopausal women based on the Korea National Health and Nutrition Surveys (KNHANES V-1). The training data set was used to construct models based on popular machine learning algorithms, such as support vector machines (SVM), random forests (RF), artificial neural networks (ANN), and logistic regression (LR), using various predictors associated with low bone density. The learning models were compared with the OST. SVM had a significantly better area under the curve (AUC) of the receiver operating characteristic (ROC) than ANN, LR, and OST. Validation on the test set showed that SVM predicted osteoporosis risk with an AUC of 0.827, accuracy of 76.7%, sensitivity of 77.8%, and specificity of 76.0%. We were the first to compare the performance of machine learning and conventional methods for osteoporosis prediction using population-based epidemiological data. The machine learning methods may be effective tools for identifying postmenopausal women at high risk for osteoporosis.
Prediction Of Abrasive And Diffusive Tool Wear Mechanisms In Machining
NASA Astrophysics Data System (ADS)
Rizzuti, S.; Umbrello, D.
2011-01-01
Tool wear prediction is regarded as a very important task in order to maximize tool performance, minimize cutting costs and improve workpiece quality in cutting. In this research work, an experimental campaign was carried out at varying cutting conditions with the aim of measuring both crater and flank tool wear during machining of AISI 1045 steel with an uncoated carbide tool (P40). In parallel, a FEM-based analysis was developed in order to study the tool wear mechanisms, taking into account the influence of the cutting conditions and the temperature reached on the tool surfaces. The results show that, when the temperature of the tool rake surface is lower than the activation temperature of the diffusive phenomenon, the wear rate can be estimated by applying an abrasive model. In contrast, in the tool area where the temperature is higher than the diffusive activation temperature, the wear rate can be evaluated by applying a diffusive model. Finally, for temperatures between the above-cited values, a combined abrasive-diffusive wear model made it possible to correctly evaluate the tool wear phenomena.
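A hedged sketch of such a temperature-gated wear-rate model: an abrasive term proportional to normal stress and sliding speed, plus an Arrhenius diffusive term that switches on above an activation temperature. The functional forms are generic (Usui-style) and all constants are invented for illustration, not values from the paper.

```python
import math

# Illustrative constants; real values are calibrated against experiments
A_ABR, B_DIF = 1.0e-8, 4.0e-2       # abrasive and diffusive prefactors
Q_OVER_R = 9000.0                    # activation temperature ratio Q/R (K)
T_ACT = 950.0                        # diffusion activation temperature (K)

def wear_rate(sigma_n, v_s, T):
    """Tool wear rate (mm/s, notional) at normal stress sigma_n (MPa),
    sliding velocity v_s (m/s) and local tool temperature T (K)."""
    abrasive = A_ABR * sigma_n * v_s                     # mechanically driven
    if T < T_ACT:
        return abrasive                                  # abrasion only
    diffusive = B_DIF * v_s * math.exp(-Q_OVER_R / T)    # Arrhenius term
    return abrasive + diffusive                          # diffusion adds in

for T in (800.0, 950.0, 1100.0):
    print(T, f"{wear_rate(600.0, 2.5, T):.3e}")
```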
Tang, Haiming; Thomas, Paul D
2016-07-15
PANTHER-PSEP is a new software tool for predicting non-synonymous genetic variants that may play a causal role in human disease. Several previous variant pathogenicity prediction methods have been proposed that quantify evolutionary conservation among homologous proteins from different organisms. PANTHER-PSEP employs a related but distinct metric based on 'evolutionary preservation': homologous proteins are used to reconstruct the likely sequences of ancestral proteins at nodes in a phylogenetic tree, and the history of each amino acid can be traced back in time from its current state to estimate how long that state has been preserved in its ancestors. Here, we describe the PSEP tool, and assess its performance on standard benchmarks for distinguishing disease-associated from neutral variation in humans. On these benchmarks, PSEP outperforms not only previous tools that utilize evolutionary conservation, but also several highly used tools that include multiple other sources of information as well. For predicting pathogenic human variants, the trace back of course starts with a human 'reference' protein sequence, but the PSEP tool can also be applied to predicting deleterious or pathogenic variants in reference proteins from any of the ∼100 other species in the PANTHER database. PANTHER-PSEP is freely available on the web at http://pantherdb.org/tools/csnpScoreForm.jsp. Users can also download the command-line based tool at ftp://ftp.pantherdb.org/cSNP_analysis/PSEP/. Contact: pdthomas@usc.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
ERIC Educational Resources Information Center
Chavez-Gibson, Sarah
2013-01-01
The purpose of this study is to examine, in depth, the Comprehensive, Powerful, Academic Database (CPAD), a data-driven decision-making tool that identifies students at risk of dropping out of school, and how the CPAD assists administrators and teachers at an elementary campus to monitor progress, curriculum, and performance to improve student…
Automatically rating trainee skill at a pediatric laparoscopic suturing task.
Oquendo, Yousi A; Riddle, Elijah W; Hiller, Dennis; Blinman, Thane A; Kuchenbecker, Katherine J
2018-04-01
Minimally invasive surgeons must acquire complex technical skills while minimizing patient risk, a challenge that is magnified in pediatric surgery. Trainees need realistic practice with frequent detailed feedback, but human grading is tedious and subjective. We aim to validate a novel motion-tracking system and algorithms that automatically evaluate trainee performance of a pediatric laparoscopic suturing task. Subjects (n = 32) ranging from medical students to fellows performed two trials of intracorporeal suturing in a custom pediatric laparoscopic box trainer after watching a video of ideal performance. The motions of the tools and endoscope were recorded over time using a magnetic sensing system, and both tool grip angles were recorded using handle-mounted flex sensors. An expert rated the 63 trial videos on five domains from the Objective Structured Assessment of Technical Skill (OSATS), yielding summed scores from 5 to 20. Motion data from each trial were processed to calculate 280 features. We used regularized least squares regression to identify the most predictive features from different subsets of the motion data and then built six regression tree models that predict summed OSATS score. Model accuracy was evaluated via leave-one-subject-out cross-validation. The model that used all sensor data streams performed best, achieving 71% accuracy at predicting summed scores within 2 points, 89% accuracy within 4, and a correlation of 0.85 with human ratings. 59% of the rounded average OSATS score predictions were perfect, and 100% were within 1 point. This model employed 87 features, including none based on completion time, 77 from tool tip motion, 3 from tool tip visibility, and 7 from grip angle. Our novel hardware and software automatically rated previously unseen trials with summed OSATS scores that closely match human expert ratings. Such a system facilitates more feedback-intensive surgical training and may yield insights into the fundamental components of surgical skill.
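The two-stage pipeline the abstract describes, regularized least squares to pick predictive motion features and a regression tree to map them to summed OSATS scores under leave-one-subject-out cross-validation, can be sketched as follows. Everything here is synthetic and scikit-learn based (with Lasso as the regularized least squares step); it mirrors the shape of the method, not the authors' code or data.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(11)
n_trials, n_feat = 63, 280                    # matches the study's dimensions
subjects = np.repeat(np.arange(32), 2)[:63]   # two trials per subject (one lost)
X = rng.normal(size=(n_trials, n_feat))       # motion/grip features (synthetic)
osats = np.clip(5 + 7.5 * X[:, :5].sum(axis=1) / 5 + rng.normal(size=n_trials),
                5, 20)                        # summed OSATS scores in [5, 20]

# Step 1: regularized least squares (here Lasso) keeps the predictive features
selector = make_pipeline(StandardScaler(), Lasso(alpha=0.2)).fit(X, osats)
keep = np.flatnonzero(selector[-1].coef_)     # indices of surviving features

# Step 2: regression tree scored with leave-one-subject-out cross-validation
errors = []
for tr, te in LeaveOneGroupOut().split(X, osats, groups=subjects):
    tree = DecisionTreeRegressor(max_depth=4, random_state=0)
    tree.fit(X[tr][:, keep], osats[tr])
    errors.extend(np.abs(tree.predict(X[te][:, keep]) - osats[te]))
print("kept", keep.size, "features; fraction within 2 points:",
      round(np.mean(np.array(errors) <= 2.0), 2))
```

Grouping the folds by subject, rather than by trial, is what keeps a trainee's second trial from leaking into the model that scores their first.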
Biodiversity in environmental assessment-current practice and tools for prediction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gontier, Mikael; Balfors, Berit; Moertberg, Ulla
Habitat loss and fragmentation are major threats to biodiversity. Environmental impact assessment and strategic environmental assessment are essential instruments used in physical planning to address such problems. Yet there are no well-developed methods for quantifying and predicting impacts of fragmentation on biodiversity. In this study, a literature review was conducted on GIS-based ecological models that have potential as prediction tools for biodiversity assessment. Further, a review of environmental impact statements for road and railway projects from four European countries was performed, to study how impact prediction concerning biodiversity issues was addressed. The results of the study showed the existing gap between research in GIS-based ecological modelling and current practice in biodiversity assessment within environmental assessment.
Metabolic pathways for the whole community.
Hanson, Niels W; Konwar, Kishori M; Hawley, Alyse K; Altman, Tomer; Karp, Peter D; Hallam, Steven J
2014-07-22
A convergence of high-throughput sequencing and computational power is transforming biology into information science. Despite these technological advances, converting bits and bytes of sequence information into meaningful insights remains a challenging enterprise. Biological systems operate on multiple hierarchical levels from genomes to biomes. Holistic understanding of biological systems requires agile software tools that permit comparative analyses across multiple information levels (DNA, RNA, protein, and metabolites) to identify emergent properties, diagnose system states, or predict responses to environmental change. Here we adopt the MetaPathways annotation and analysis pipeline and Pathway Tools to construct environmental pathway/genome databases (ePGDBs) that describe microbial community metabolism using MetaCyc, a highly curated database of metabolic pathways and components covering all domains of life. We evaluate Pathway Tools' performance on three datasets with different complexity and coding potential, including simulated metagenomes, a symbiotic system, and the Hawaii Ocean Time-series. We define accuracy and sensitivity relationships between read length, coverage and pathway recovery and evaluate the impact of taxonomic pruning on ePGDB construction and interpretation. Resulting ePGDBs provide interactive metabolic maps, predict emergent metabolic pathways associated with biosynthesis and energy production and differentiate between genomic potential and phenotypic expression across defined environmental gradients. This multi-tiered analysis provides the user community with specific operating guidelines, performance metrics and prediction hazards for more reliable ePGDB construction and interpretation. Moreover, it demonstrates the power of Pathway Tools in predicting metabolic interactions in natural and engineered ecosystems.
StructRNAfinder: an automated pipeline and web server for RNA families prediction.
Arias-Carrasco, Raúl; Vásquez-Morán, Yessenia; Nakaya, Helder I; Maracaja-Coutinho, Vinicius
2018-02-17
The function of many noncoding RNAs (ncRNAs) depends upon their secondary structures. Over the last decades, several methodologies have been developed to predict such structures or to use them to functionally annotate RNAs into RNA families. However, to fully perform this analysis, researchers should utilize multiple tools, which require the constant parsing and processing of several intermediate files. This makes the large-scale prediction and annotation of RNAs a daunting task even to researchers with good computational or bioinformatics skills. We present an automated pipeline named StructRNAfinder that predicts and annotates RNA families in transcript or genome sequences. This single tool not only displays the sequence/structural consensus alignments for each RNA family, according to the Rfam database, but also provides a taxonomic overview for each assigned functional RNA. Moreover, we implemented a user-friendly web service that allows researchers to upload their own nucleotide sequences in order to perform the whole analysis. Finally, we provided a stand-alone version of StructRNAfinder to be used in large-scale projects. The tool was developed under GNU General Public License (GPLv3) and is freely available at http://structrnafinder.integrativebioinformatics.me . The main advantage of StructRNAfinder lies in its large-scale processing and integration of the data produced by each tool and database employed along the workflow; the resulting files are summarized in user-friendly reports, useful for downstream analyses and data exploration.
Acoustic prediction methods for the NASA generalized advanced propeller analysis system (GAPAS)
NASA Technical Reports Server (NTRS)
Padula, S. L.; Block, P. J. W.
1984-01-01
Classical methods of propeller performance analysis are coupled with state-of-the-art Aircraft Noise Prediction Program (ANOPP) techniques to yield a versatile design tool, the NASA Generalized Advanced Propeller Analysis System (GAPAS), for the design of novel quiet and efficient propellers. ANOPP is a collection of modular specialized programs. GAPAS as a whole addresses blade geometry and aerodynamics, rotor performance and loading, and subsonic propeller noise.
NASA Technical Reports Server (NTRS)
Omura, J. K.; Simon, M. K.
1982-01-01
A theory is presented for deducing and predicting the performance of transmitter/receivers for bandwidth efficient modulations suitable for use on the linear satellite channel. The underlying principle used is the development of receiver structures based on the maximum-likelihood decision rule. Performance prediction tools, e.g., channel cutoff rate and bit error probability transfer function bounds, are applied to these modulation/demodulation techniques.
A cross-validation package driving Netica with python
Fienen, Michael N.; Plant, Nathaniel G.
2014-01-01
Bayesian networks (BNs) are powerful tools for probabilistically simulating natural systems and emulating process models. Cross validation is a technique to avoid overfitting resulting from overly complex BNs. Overfitting reduces predictive skill. Cross-validation for BNs is known but rarely implemented due partly to a lack of software tools designed to work with available BN packages. CVNetica is open-source, written in Python, and extends the Netica software package to perform cross-validation and read, rebuild, and learn BNs from data. Insights gained from cross-validation and implications on prediction versus description are illustrated with two examples: a data-driven oceanographic application and a model-emulation application. These examples show that overfitting occurs when BNs become more complex than the supporting data allow, and that overfitting incurs computational costs as well as reduced prediction skill. CVNetica evaluates overfitting using several complexity metrics (we used level of discretization) and its impact on performance metrics (we used skill).
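As a generic illustration of the overfitting diagnostic described above (not CVNetica's actual API), the sketch below shows cross-validated prediction error worsening once the level of discretization, one of the complexity metrics mentioned, outruns the supporting data.

```python
# Generic illustration (not CVNetica's API): k-fold CV error of a
# binned-mean predictor, for increasing discretization (model complexity).
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 300)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.5, 300)

def cv_error(n_bins, k=5):
    """Mean squared error of a binned-mean predictor under k-fold CV."""
    folds = np.array_split(rng.permutation(len(x)), k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        edges = np.linspace(0, 1, n_bins + 1)
        b_tr = np.clip(np.digitize(x[train], edges) - 1, 0, n_bins - 1)
        b_te = np.clip(np.digitize(x[test], edges) - 1, 0, n_bins - 1)
        # per-bin training means; empty bins fall back to the global mean
        means = np.array([y[train][b_tr == b].mean() if np.any(b_tr == b)
                          else y[train].mean() for b in range(n_bins)])
        errs.append(np.mean((y[test] - means[b_te]) ** 2))
    return np.mean(errs)

for n_bins in (2, 5, 10, 50, 150):   # finer discretization = more complexity
    print(n_bins, round(cv_error(n_bins), 3))
```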
NASA Technical Reports Server (NTRS)
Mercer, Joey S.; Bienert, Nancy; Gomez, Ashley; Hunt, Sarah; Kraut, Joshua; Martin, Lynne; Morey, Susan; Green, Steven M.; Prevot, Thomas; Wu, Minghong G.
2013-01-01
A Human-In-The-Loop air traffic control simulation investigated the impact of uncertainties in trajectory predictions on NextGen Trajectory-Based Operations concepts, seeking to understand when the automation would become unacceptable to controllers or when performance targets could no longer be met. Retired air traffic controllers staffed two en route transition sectors, delivering arrival traffic to the northwest corner-post of Atlanta approach control under time-based metering operations. Using trajectory-based decision-support tools, the participants worked the traffic under varying levels of wind forecast error and aircraft performance model error, impacting the ground automation's ability to make accurate predictions. Results suggest that the controllers were able to maintain high levels of performance, despite even the highest levels of trajectory prediction errors.
A ligand prediction tool based on modeling and reasoning with imprecise probabilistic knowledge.
Liu, Weiru; Yue, Anbu; Timson, David J
2010-04-01
Ligand prediction has been driven by a fundamental desire to understand more about how biomolecules recognize their ligands and by the commercial imperative to develop new drugs. Most of the currently available software systems are very complex and time-consuming to use. Therefore, developing simple and efficient tools to perform initial screening of interesting compounds is an appealing idea. In this paper, we introduce our tool for very rapid screening for likely ligands (either substrates or inhibitors) based on reasoning with imprecise probabilistic knowledge elicited from past experiments. Probabilistic knowledge is input to the system via a user-friendly interface showing a base compound structure. A prediction of whether a particular compound is a substrate is queried against the acquired probabilistic knowledge base and a probability is returned as an indication of the prediction. This tool will be particularly useful in situations where a number of similar compounds have been screened experimentally, but information is not available for all possible members of that group of compounds. We use two case studies to demonstrate how to use the tool. 2009 Elsevier Ireland Ltd. All rights reserved.
Pathway Tools version 13.0: integrated software for pathway/genome informatics and systems biology
Paley, Suzanne M.; Krummenacker, Markus; Latendresse, Mario; Dale, Joseph M.; Lee, Thomas J.; Kaipa, Pallavi; Gilham, Fred; Spaulding, Aaron; Popescu, Liviu; Altman, Tomer; Paulsen, Ian; Keseler, Ingrid M.; Caspi, Ron
2010-01-01
Pathway Tools is a production-quality software environment for creating a type of model-organism database called a Pathway/Genome Database (PGDB). A PGDB such as EcoCyc integrates the evolving understanding of the genes, proteins, metabolic network and regulatory network of an organism. This article provides an overview of Pathway Tools capabilities. The software performs multiple computational inferences including prediction of metabolic pathways, prediction of metabolic pathway hole fillers and prediction of operons. It enables interactive editing of PGDBs by DB curators. It supports web publishing of PGDBs, and provides a large number of query and visualization tools. The software also supports comparative analyses of PGDBs, and provides several systems biology analyses of PGDBs including reachability analysis of metabolic networks, and interactive tracing of metabolites through a metabolic network. More than 800 PGDBs have been created using Pathway Tools by scientists around the world, many of which are curated DBs for important model organisms. Those PGDBs can be exchanged using a peer-to-peer DB sharing system called the PGDB Registry. PMID:19955237
Liau, Siow Yen; Mohamed Izham, M I; Hassali, M A; Shafie, A A
2010-01-01
Cardiovascular diseases, the main causes of hospitalisations and death globally, have put an enormous economic burden on the healthcare system. Several risk factors are associated with the occurrence of cardiovascular events. At the heart of efficient prevention of cardiovascular disease is the concept of risk assessment. This paper aims to review the available cardiovascular risk-assessment tools and its applicability in predicting cardiovascular risk among Asian populations. A systematic search was performed using keywords as MeSH and Boolean terms. A total of 25 risk-assessment tools were identified. Of these, only two risk-assessment tools (8%) were derived from an Asian population. These risk-assessment tools differ in various ways, including characteristics of the derivation sample, type of study, time frame of follow-up, end points, statistical analysis and risk factors included. Very few cardiovascular risk-assessment tools were developed in Asian populations. In order to accurately predict the cardiovascular risk of our population, there is a need to develop a risk-assessment tool based on local epidemiological data.
Prakash, Rangasamy; Krishnaraj, Vijayan; Zitoune, Redouane; Sheikh-Ahmad, Jamal
2016-01-01
Carbon fiber reinforced polymers (CFRPs) have found wide-ranging applications in numerous industrial fields such as the aerospace, automotive, and shipping industries due to their excellent mechanical properties that lead to enhanced functional performance. In this paper, an experimental study on edge trimming of CFRP was conducted under various cutting conditions and with different tool geometries, including helical-, fluted-, and burr-type tools. The investigation involves the measurement of cutting forces for the different machining conditions and their effect on the surface quality of the trimmed edges. The modern cutting tools (router or burr tools) selected for machining CFRPs have complex geometries in their cutting edges and surfaces, and therefore a traditional method of direct tool wear evaluation is not applicable. Acoustic emission (AE) sensing was employed for on-line monitoring of the performance of router tools to determine the relationship between the AE signal and length of machining for different tool geometries. The investigation showed that the router tool with a flat cutting edge has better performance, generating lower cutting force and better surface finish with no delamination on trimmed edges. Mathematical modeling for the prediction of cutting forces was also performed using Artificial Neural Network and Regression Analysis. PMID:28773919
RDNAnalyzer: A tool for DNA secondary structure prediction and sequence analysis
Afzal, Muhammad; Shahid, Ahmad Ali; Shehzadi, Abida; Nadeem, Shahid; Husnain, Tayyab
2012-01-01
RDNAnalyzer is an innovative computer-based tool designed for DNA secondary structure prediction and sequence analysis. It can randomly generate a DNA sequence, or the user can upload sequences of interest in RAW format. It uses and extends the Nussinov dynamic programming algorithm and has various applications for sequence analysis. It predicts the DNA secondary structure and base pairings. It also provides tools for sequence analyses routinely performed by biological scientists, such as DNA replication, reverse complement generation, transcription, translation, and sequence-specific information (total number of nucleotide bases and ATGC base contents with their respective percentages), along with a sequence cleaner. RDNAnalyzer is a unique tool developed in Microsoft Visual Studio 2008 using Microsoft Visual C# and Windows Presentation Foundation and provides a user-friendly environment for sequence analysis. It is freely available. Availability: http://www.cemb.edu.pk/sw.html Abbreviations: RDNAnalyzer - Random DNA Analyser, GUI - Graphical user interface, XAML - Extensible Application Markup Language. PMID:23055611
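The core of the Nussinov dynamic programming recurrence that RDNAnalyzer extends can be sketched in a few lines. This is the textbook base-pair-maximization version, without the refinements a production tool would add (minimum loop length, traceback of the paired structure):

```python
# Minimal Nussinov base-pair maximization for a DNA sequence:
# dp[i][j] holds the maximum number of pairs formable within seq[i..j].
def nussinov_pairs(seq, pairs={"AT", "TA", "GC", "CG", "GT", "TG"}):
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]                          # i left unpaired
            if seq[i] + seq[j] in pairs:
                best = max(best, dp[i + 1][j - 1] + 1)   # i pairs with j
            for k in range(i + 1, j):                    # bifurcation
                best = max(best, dp[i][k] + dp[k + 1][j])
            dp[i][j] = best
    return dp[0][n - 1]

print(nussinov_pairs("GGGAAATCCC"))  # -> 4 pairs for this toy sequence
```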
NREL Improves Building Energy Simulation Programs Through Diagnostic Testing (Fact Sheet)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2012-01-01
This technical highlight describes NREL research to develop Building Energy Simulation Test for Existing Homes (BESTEST-EX) to increase the quality and accuracy of energy analysis tools for the building retrofit market. Researchers at the National Renewable Energy Laboratory (NREL) have developed a new test procedure to increase the quality and accuracy of energy analysis tools for the building retrofit market. The Building Energy Simulation Test for Existing Homes (BESTEST-EX) is a test procedure that enables software developers to evaluate the performance of their audit tools in modeling energy use and savings in existing homes when utility bills are available for model calibration. Similar to NREL's previous energy analysis tests, such as HERS BESTEST and other BESTEST suites included in ANSI/ASHRAE Standard 140, BESTEST-EX compares software simulation findings to reference results generated with state-of-the-art simulation tools such as EnergyPlus, SUNREL, and DOE-2.1E. The BESTEST-EX methodology: (1) Tests software predictions of retrofit energy savings in existing homes; (2) Ensures building physics calculations and utility bill calibration procedures perform to a minimum standard; and (3) Quantifies impacts of uncertainties in input audit data and occupant behavior. BESTEST-EX includes building physics and utility bill calibration test cases. The diagram illustrates the utility bill calibration test cases. Participants are given input ranges and synthetic utility bills. Software tools use the utility bills to calibrate key model inputs and predict energy savings for the retrofit cases. Participant energy savings predictions using calibrated models are compared to NREL predictions using state-of-the-art building energy simulation programs.
NASA Technical Reports Server (NTRS)
Johnston, John D.; Howard, Joseph M.; Mosier, Gary E.; Parrish, Keith A.; McGinnis, Mark A.; Bluth, Marcel; Kim, Kevin; Ha, Kong Q.
2004-01-01
The James Webb Space Telescope (JWST) is a large, infrared-optimized space telescope scheduled for launch in 2011. This is a continuation of a series of papers on modeling activities for JWST. The structural-thermal-optical (STOP) analysis process is used to predict the effect of thermal distortion on optical performance. The benchmark STOP analysis for JWST assesses the effect of an observatory slew on wavefront error. Temperatures predicted using geometric and thermal math models are mapped to a structural finite element model in order to predict thermally induced deformations. Motions and deformations at optical surfaces are then input to optical models, and optical performance is predicted using either an optical ray trace or a linear optical analysis tool. In addition to baseline performance predictions, a process for performing sensitivity studies to assess modeling uncertainties is described.
Wide-Area Traffic Management for Cloud Services
2012-04-01
performance prediction tools [11], which are usually load oblivious. Therefore, without information about link loads and capacities, a CDN may direct...powerful tool. DONAR allows its customers to dictate a replica's (i) split weight, w_i, the desired proportion of requests that a particular replica i...Diagnostic Tool (NDT) [100], which is used for the Federal Communication Commission's Consumer Broadband Test, and NPAD [101]—are more closely integrated with
The Rangeland Hydrology and Erosion Model: A dynamic approach for predicting soil loss on rangelands
USDA-ARS?s Scientific Manuscript database
In this study we present the improved Rangeland Hydrology and Erosion Model (RHEM V2.3), a process-based erosion prediction tool specific for rangeland application. The article provides the mathematical formulation of the model and parameter estimation equations. Model performance is assessed agains...
Zhang, Cunji; Yao, Xifan; Zhang, Jianming; Jin, Hong
2016-05-31
Tool breakage causes loss of surface polish and dimensional accuracy for the machined part, or possible damage to the workpiece or machine. Tool Condition Monitoring (TCM) is therefore vital in the manufacturing industry. In this paper, an indirect TCM approach is introduced using a wireless triaxial accelerometer. The vibrations in the three orthogonal directions (x, y and z) are acquired during milling operations, and the raw signals are de-noised by wavelet analysis. Features of the de-noised signals are extracted in the time, frequency and time-frequency domains. The key features are selected based on Pearson's Correlation Coefficient (PCC). A Neuro-Fuzzy Network (NFN) is adopted to predict the tool wear and Remaining Useful Life (RUL). In comparison with a Back Propagation Neural Network (BPNN) and a Radial Basis Function Network (RBFN), the results show that the NFN has the best performance in the prediction of tool wear and RUL.
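The PCC-based feature screening step described above is a simple correlation filter; a hedged sketch follows, in which the feature matrix and wear values are hypothetical stand-ins for the extracted vibration features and measured tool wear.

```python
# Illustrative PCC feature screening: keep features whose absolute Pearson
# correlation with measured wear exceeds a threshold. All data are synthetic.
import numpy as np

def select_by_pcc(feature_matrix, tool_wear, threshold=0.8):
    keep = []
    for j in range(feature_matrix.shape[1]):
        r = np.corrcoef(feature_matrix[:, j], tool_wear)[0, 1]
        if abs(r) >= threshold:
            keep.append(j)
    return keep

rng = np.random.default_rng(2)
wear = np.linspace(0.05, 0.30, 40)                 # simulated wear curve (mm)
feats = rng.normal(size=(40, 12))                  # 12 candidate features
feats[:, 0] = wear * 10 + rng.normal(0, 0.1, 40)   # one wear-sensitive feature
print(select_by_pcc(feats, wear))                  # -> [0]
```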
Software tool for portal dosimetry research.
Vial, P; Hunt, P; Greer, P B; Oliver, L; Baldock, C
2008-09-01
This paper describes a software tool developed for research into the use of an electronic portal imaging device (EPID) to verify dose for intensity modulated radiation therapy (IMRT) beams. A portal dose image prediction (PDIP) model that predicts the EPID response to IMRT beams has been implemented into a commercially available treatment planning system (TPS). The software tool described in this work was developed to modify the TPS PDIP model by incorporating correction factors into the predicted EPID image to account for the difference in EPID response to open beam radiation and multileaf collimator (MLC) transmitted radiation. The processes performed by the software tool include: (i) read the MLC file and the PDIP from the TPS; (ii) calculate the fraction of beam-on time that each point in the IMRT beam is shielded by MLC leaves; (iii) interpolate correction factors from look-up tables; (iv) create a corrected PDIP image from the product of the original PDIP and the correction factors and write the corrected image to file; and (v) display, analyse, and export various image datasets. The software tool was developed using the Microsoft Visual Studio .NET framework with the C# compiler. The operation of the software tool was validated. This software provided useful tools for EPID dosimetry research, and it is being utilised and further developed in ongoing EPID dosimetry and IMRT dosimetry projects.
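Steps (ii)-(iv) above amount to a per-pixel lookup and multiplication. The sketch below illustrates the idea in Python rather than the authors' C#/.NET implementation; the arrays and the correction-factor table are illustrative placeholders.

```python
# Simplified sketch of steps (ii)-(iv): given the fraction of beam-on time
# each pixel is shielded by MLC leaves, interpolate a correction factor and
# scale the predicted portal dose image. The lookup table is assumed.
import numpy as np

def correct_pdip(pdip, shielded_fraction, lut_fraction, lut_factor):
    """Multiply the predicted EPID image by interpolated correction factors."""
    factors = np.interp(shielded_fraction, lut_fraction, lut_factor)
    return pdip * factors

pdip = np.ones((4, 4))                             # toy predicted EPID image
shielded = np.linspace(0, 1, 16).reshape(4, 4)     # fraction of time under MLC
lut_f = np.array([0.0, 0.5, 1.0])                  # shielded fraction grid
lut_c = np.array([1.00, 0.93, 0.85])               # correction factors (assumed)
print(correct_pdip(pdip, shielded, lut_f, lut_c))
```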
Kaserer, Teresa; Temml, Veronika; Kutil, Zsofia; Vanek, Tomas; Landa, Premysl; Schuster, Daniela
2015-01-01
Computational methods can be applied in drug development for the identification of novel lead candidates, but also for the prediction of pharmacokinetic properties and potential adverse effects, thereby aiding to prioritize and identify the most promising compounds. In principle, several techniques are available for this purpose; however, which one is the most suitable for a specific research objective still requires further investigation. Within this study, the performance of several programs, representing common virtual screening methods, was compared in a prospective manner. First, we selected top-ranked virtual screening hits from the three methods pharmacophore modeling, shape-based modeling, and docking. For comparison, these hits were then additionally predicted by external pharmacophore- and 2D similarity-based bioactivity profiling tools. Subsequently, the biological activities of the selected hits were assessed in vitro, which allowed for evaluating and comparing the prospective performance of the applied tools. Although all methods performed well, considerable differences were observed concerning hit rates, true positive and true negative hits, and hitlist composition. Our results suggest that a rational selection of the applied method represents a powerful strategy to maximize the success of a research project, tightly linked to its aims. We employed cyclooxygenase as an application example; however, the focus of this study lay on highlighting the differences in the virtual screening tool performances and not on the identification of novel COX-inhibitors. Copyright © 2015 The Authors. Published by Elsevier Masson SAS. All rights reserved.
Sperschneider, Jana; Williams, Angela H; Hane, James K; Singh, Karam B; Taylor, Jennifer M
2015-01-01
The steadily increasing number of sequenced fungal and oomycete genomes has enabled detailed studies of how these eukaryotic microbes infect plants and cause devastating losses in food crops. During infection, fungal and oomycete pathogens secrete effector molecules which manipulate host plant cell processes to the pathogen's advantage. Proteinaceous effectors are synthesized intracellularly and must be externalized to interact with host cells. Computational prediction of secreted proteins from genomic sequences is an important technique to narrow down the candidate effector repertoire for subsequent experimental validation. In this study, we benchmark secretion prediction tools on experimentally validated fungal and oomycete effectors. We observe that for a set of fungal SwissProt protein sequences, SignalP 4 and the neural network predictors of SignalP 3 (D-score) and SignalP 2 perform best. For effector prediction in particular, the use of a sensitive method can be desirable to obtain the most complete candidate effector set. We show that the neural network predictors of SignalP 2 and 3, as well as TargetP were the most sensitive tools for fungal effector secretion prediction, whereas the hidden Markov model predictors of SignalP 2 and 3 were the most sensitive tools for oomycete effectors. Thus, previous versions of SignalP retain value for oomycete effector prediction, as the current version, SignalP 4, was unable to reliably predict the signal peptide of the oomycete Crinkler effectors in the test set. Our assessment of subcellular localization predictors shows that cytoplasmic effectors are often predicted as not extracellular. This limits the reliability of secretion predictions that depend on these tools. We present our assessment with a view to informing future pathogenomics studies and suggest revised pipelines for secretion prediction to obtain optimal effector predictions in fungi and oomycetes.
Performance assessment of a Bayesian Forecasting System (BFS) for real-time flood forecasting
NASA Astrophysics Data System (ADS)
Biondi, D.; De Luca, D. L.
2013-02-01
The paper evaluates, for a number of flood events, the performance of a Bayesian Forecasting System (BFS), with the aim of quantifying total uncertainty in real-time flood forecasting. The predictive uncertainty of future streamflow is estimated through the Bayesian integration of two separate processors. The former evaluates the propagation of input uncertainty on simulated river discharge, the latter computes the hydrological uncertainty of actual river discharge associated with all other possible sources of error. A stochastic model and a distributed rainfall-runoff model were assumed, respectively, for rainfall and hydrological response simulations. A case study was carried out for a small basin in the Calabria region (southern Italy). The performance assessment of the BFS was performed with adequate verification tools suited for probabilistic forecasts of continuous variables such as streamflow. Graphical tools and scalar metrics were used to evaluate several attributes of the forecast quality of the entire time-varying predictive distributions: calibration, sharpness, accuracy, and continuous ranked probability score (CRPS). Besides the overall system, which incorporates both sources of uncertainty, other hypotheses resulting from the BFS properties were examined, corresponding to (i) a perfect hydrological model; (ii) a non-informative rainfall forecast for predicting streamflow; and (iii) a perfect input forecast. The results emphasize the importance of using different diagnostic approaches to perform comprehensive analyses of predictive distributions, to arrive at a multifaceted view of the attributes of the prediction. For the case study, the selected criteria revealed the interaction of the different sources of error, in particular the crucial role of the hydrological uncertainty processor when compensating, at the cost of wider forecast intervals, for the unreliable and biased predictive distribution resulting from the Precipitation Uncertainty Processor.
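For reference, the CRPS mentioned above can be computed for an ensemble forecast from the standard identity CRPS = E|X - y| - 0.5 E|X - X'|, where X and X' are independent draws from the forecast distribution and y is the observation. The code below is an illustrative implementation, not the authors':

```python
# Illustrative CRPS for an ensemble streamflow forecast (toy values).
import numpy as np

def crps_ensemble(members, obs):
    """CRPS via the identity E|X - obs| - 0.5 * E|X - X'|."""
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

forecast = np.random.default_rng(3).gamma(shape=2.0, scale=15.0, size=500)
print(round(crps_ensemble(forecast, obs=25.0), 2))  # toy discharge in m^3/s
```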
Post-Flight Assessment of Low Density Supersonic Decelerator Flight Dynamics Test 2 Simulation
NASA Technical Reports Server (NTRS)
Dutta, Soumyo; Bowes, Angela L.; White, Joseph P.; Striepe, Scott A.; Queen, Eric M.; O'Farrel, Clara; Ivanov, Mark C.
2016-01-01
NASA's Low Density Supersonic Decelerator (LDSD) project conducted its second Supersonic Flight Dynamics Test (SFDT-2) on June 8, 2015. The Program to Optimize Simulated Trajectories II (POST2) was one of the flight dynamics tools used to simulate and predict the flight performance and was a major tool used in the post-flight assessment of the flight trajectory. This paper compares the simulation predictions with the reconstructed trajectory. Additionally, off-nominal conditions seen during flight are modeled in the simulation to reconcile the predictions with flight data. These analyses are beneficial to characterize the results of the flight test and to improve the simulation and targeting of the subsequent LDSD flights.
Gruber, Andreas R; Bernhart, Stephan H; Lorenz, Ronny
2015-01-01
The ViennaRNA package is a widely used collection of programs for thermodynamic RNA secondary structure prediction. Over the years, many additional tools have been developed building on the core programs of the package to also address issues related to noncoding RNA detection, RNA folding kinetics, or efficient sequence design considering RNA-RNA hybridizations. The ViennaRNA web services provide easy and user-friendly web access to these tools. This chapter describes how to use this online platform to perform tasks such as prediction of minimum free energy structures, prediction of RNA-RNA hybrids, or noncoding RNA detection. The ViennaRNA web services can be used free of charge and can be accessed via http://rna.tbi.univie.ac.at.
Statistical Methods for Rapid Aerothermal Analysis and Design Technology: Validation
NASA Technical Reports Server (NTRS)
DePriest, Douglas; Morgan, Carolyn
2003-01-01
The cost and safety goals for NASA's next generation of reusable launch vehicle (RLV) will require that rapid high-fidelity aerothermodynamic design tools be used early in the design cycle. To meet these requirements, it is desirable to identify adequate statistical models that quantify and improve the accuracy, extend the applicability, and enable combined analyses using existing prediction tools. The initial research work focused on establishing suitable candidate models for these purposes. The second phase focused on assessing the ability of these models to accurately predict the heat rate for a given candidate data set. This validation work compared models and methods that may be useful in predicting the heat rate.
A Final Approach Trajectory Model for Current Operations
NASA Technical Reports Server (NTRS)
Gong, Chester; Sadovsky, Alexander
2010-01-01
Predicting accurate trajectories with limited intent information is a challenge faced by air traffic management decision support tools in operation today. One such tool is the FAA's Terminal Proximity Alert system which is intended to assist controllers in maintaining safe separation of arrival aircraft during final approach. In an effort to improve the performance of such tools, two final approach trajectory models are proposed; one based on polynomial interpolation, the other on the Fourier transform. These models were tested against actual traffic data and used to study effects of the key final approach trajectory modeling parameters of wind, aircraft type, and weight class, on trajectory prediction accuracy. Using only the limited intent data available to today's ATM system, both the polynomial interpolation and Fourier transform models showed improved trajectory prediction accuracy over a baseline dead reckoning model. Analysis of actual arrival traffic showed that this improved trajectory prediction accuracy leads to improved inter-arrival separation prediction accuracy for longer look ahead times. The difference in mean inter-arrival separation prediction error between the Fourier transform and dead reckoning models was 0.2 nmi for a look ahead time of 120 sec, a 33 percent improvement, with a corresponding 32 percent improvement in standard deviation.
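The gap between dead reckoning and a fitted polynomial model can be illustrated on a toy decelerating approach profile; the profile and parameters below are invented for illustration and are unrelated to the actual traffic data or the paper's models.

```python
# Toy comparison of dead reckoning vs. polynomial extrapolation on a
# decelerating along-track position profile (all values synthetic).
import numpy as np

t = np.arange(0.0, 60.0, 5.0)            # track history, seconds
pos = 8.0 * t - 0.03 * t**2              # along-track position (toy units)

t_pred = 120.0
# Dead reckoning: constant speed inferred from the last two track points.
v_last = (pos[-1] - pos[-2]) / (t[-1] - t[-2])
dr = pos[-1] + v_last * (t_pred - t[-1])

# Polynomial model: least-squares quadratic fit to the track history.
coeffs = np.polyfit(t, pos, deg=2)
poly = np.polyval(coeffs, t_pred)

truth = 8.0 * t_pred - 0.03 * t_pred**2
print(f"dead reckoning error: {abs(dr - truth):.1f}")
print(f"polynomial error:     {abs(poly - truth):.1f}")
```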
Task scheduling in dataflow computer architectures
NASA Technical Reports Server (NTRS)
Katsinis, Constantine
1994-01-01
Dataflow computers provide a platform for the solution of a large class of computational problems, which includes digital signal processing and image processing. Many typical applications are represented by a set of tasks which can be repetitively executed in parallel as specified by an associated dataflow graph. Research in this area aims to model these architectures, develop scheduling procedures, and predict the transient and steady state performance. Researchers at NASA have created a model and developed associated software tools which are capable of analyzing a dataflow graph and predicting its runtime performance under various resource and timing constraints. These models and tools were extended and used in this work. Experiments using these tools revealed certain properties of such graphs that require further study. Specifically, the transient behavior at the beginning of the execution of a graph can have a significant effect on the steady state performance. Transformation and retiming of the application algorithm and its initial conditions can produce a different transient behavior and consequently different steady state performance. The effect of such transformations on the resource requirements or under resource constraints requires extensive study. Task scheduling to obtain maximum performance (based on user-defined criteria), or to satisfy a set of resource constraints, can also be significantly affected by a transformation of the application algorithm. Since task scheduling is performed by heuristic algorithms, further research is needed to determine if new scheduling heuristics can be developed that can exploit such transformations. This work has provided the initial development for further long-term research efforts. A simulation tool was completed to provide insight into the transient and steady state execution of a dataflow graph. A set of scheduling algorithms was completed which can operate in conjunction with the modeling and performance tools previously developed. Initial studies on the performance of these algorithms were done to examine the effects of application algorithm transformations as measured by such quantities as number of processors, time between outputs, time between input and output, communication time, and memory size.
Cavitation in liquid cryogens. 4: Combined correlations for venturi, hydrofoil, ogives, and pumps
NASA Technical Reports Server (NTRS)
Hord, J.
1974-01-01
The results of a series of experimental and analytical cavitation studies are presented. Cross-correlation of the developed cavity data is performed for a venturi, a hydrofoil, and three scaled ogives. The new correlating parameter, MTWO, improves data correlation for these stationary bodies and for pumping equipment. Existing techniques for predicting the cavitating performance of pumping machinery were extended to include variations in flow coefficient, cavitation parameter, and equipment geometry. The new predictive formulations hold promise as a design tool and a universal method for correlating pumping machinery performance. Application of these predictive formulas requires prescribed cavitation test data or an independent method of estimating the cavitation parameter for each pump. The latter would permit prediction of performance without testing; potential methods for evaluating the cavitation parameter prior to testing are suggested.
NASA Astrophysics Data System (ADS)
Di Lorenzo, R.; Ingarao, G.; Fonti, V.
2007-05-01
The crucial task in the prevention of ductile fracture is the availability of a tool for predicting the occurrence of such defects. The technical literature reports wide investigation of this topic, with contributions from many authors following different approaches. The main class of approaches concerns the development of fracture criteria: generally, such criteria are expressed by determining a critical value of a damage function which depends on stress and strain paths; ductile fracture is assumed to occur when this critical value is reached during the analysed process. A relevant drawback is related to the utilization of ductile fracture criteria: each criterion usually performs well in predicting fracture for particular stress-strain paths, i.e. it works very well for certain processes but may give poor results for others. On the other hand, approaches based on damage mechanics formulations are very effective from a theoretical point of view, but they are complex and their proper calibration is quite difficult. In this paper, two different approaches are investigated to predict fracture occurrence in cold forming operations. The final aim of the proposed method is a tool of general reliability, i.e. one able to predict fracture for different forming processes. The proposed approach represents a step forward within a research project focused on the utilization of innovative predictive tools for ductile fracture. The paper presents a comparison between an artificial neural network design procedure and an approach based on statistical tools; both approaches aim to predict fracture occurrence/absence based on a set of stress and strain path data. The proposed approach exploits the experimental data available, for a given material, on fracture occurrence in different processes. In more detail, it consists of the analysis of experimental tests in which fracture occurs, followed by numerical simulations of such processes in order to track the stress-strain paths in the workpiece region where fracture is expected. These data are used to build up a data set utilized both to train an artificial neural network and to perform a statistical analysis aimed at predicting fracture occurrence. The developed statistical tool is properly designed and optimized and is able to recognize fracture occurrence. The reliability and predictive capability of the statistical method were compared with those obtained from an artificial neural network developed to predict fracture occurrence. Moreover, the approach is validated in forming processes characterized by complex fracture mechanics.
Mueller, Martina; Wagner, Carol L; Annibale, David J; Knapp, Rebecca G; Hulsey, Thomas C; Almeida, Jonas S
2006-03-01
Approximately 30% of intubated preterm infants with respiratory distress syndrome (RDS) will fail attempted extubation, requiring reintubation and mechanical ventilation. Although ventilator technology and monitoring of premature infants have improved over time, optimal extubation remains challenging. Furthermore, extubation decisions for premature infants require complex informational processing, techniques implicitly learned through clinical practice. Computer-aided decision-support tools would benefit inexperienced clinicians, especially during peak neonatal intensive care unit (NICU) census. A five-step procedure was developed to identify predictive variables. Clinical expert (CE) thought processes comprised one model. Variables from that model were used to develop two mathematical models for the decision-support tool: an artificial neural network (ANN) and a multivariate logistic regression model (MLR). The ranking of the variables in the three models was compared using the Wilcoxon Signed Rank Test. The best performing model was used in a web-based decision-support tool with a user interface implemented in Hypertext Markup Language (HTML) and the ANN as the underlying mathematical model. CEs identified 51 potentially predictive variables for extubation decisions for an infant on mechanical ventilation. Comparisons of the three models showed a significant difference between the ANN and the CE (p = 0.0006). Of the original 51 potentially predictive variables, the 13 most predictive variables were used to develop an ANN as a web-based decision tool. The ANN processes user-provided data and returns a prediction score between 0 and 1 together with a novelty index. The user then selects the most appropriate threshold for categorizing the prediction as a success or failure. Furthermore, the novelty index, indicating the similarity of the test case to the training cases, allows the user to assess the confidence level of the prediction with regard to how much the new data differ from the data originally used for the development of the prediction tool. State-of-the-art machine-learning methods can be employed for the development of sophisticated tools to aid clinicians' decisions. We identified numerous variables considered relevant for extubation decisions for mechanically ventilated premature infants with RDS. We then developed a web-based decision-support tool for clinicians which can be made widely available and potentially improve patient care worldwide.
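The novelty index described above can be realized in several ways; one common choice (assumed here, not necessarily the authors') is the Mahalanobis distance of the new case from the training data.

```python
# Hedged sketch of a novelty index: distance of a new case from the training
# distribution, used to qualify how much to trust a model's prediction.
import numpy as np

def novelty_index(x_new, X_train):
    """Mahalanobis distance of x_new from the training data (assumed form)."""
    mu = X_train.mean(axis=0)
    cov = np.cov(X_train, rowvar=False)
    diff = x_new - mu
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

rng = np.random.default_rng(4)
X_train = rng.normal(size=(200, 13))       # 13 predictive variables, as above
print(novelty_index(np.zeros(13), X_train))        # typical case: small value
print(novelty_index(np.full(13, 5.0), X_train))    # atypical case: large value
```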
Low Cost Gas Turbine Off-Design Prediction Technique
NASA Astrophysics Data System (ADS)
Martinjako, Jeremy
This thesis seeks to further explore off-design point operation of gas turbines and to examine the capabilities of GasTurb 12 as a tool for off-design analysis. It is a continuation of previous thesis work which initially explored the capabilities of GasTurb 12. The research is conducted in order to: 1) validate GasTurb 12 and 2) predict off-design performance of the Garrett GTCP85-98D located at the Arizona State University Tempe campus. GasTurb 12 is validated as an off-design point tool by using the program to predict performance of an LM2500+ marine gas turbine. Haglind and Elmegaard (2009) published a paper detailing a second off-design point method, which includes the manufacturer's off-design point data for the LM2500+. GasTurb 12 is used to predict off-design point performance of the LM2500+ and compared to the manufacturer's data. The GasTurb 12 predictions show good agreement with the manufacturer's data. Garrett has published specification data for the GTCP85-98D. This specification data is analyzed to determine the design point and to comment on off-design trends. Arizona State University GTCP85-98D off-design experimental data is evaluated. Trends presented in the data are commented on and explained. The trends match the expected behavior demonstrated in the specification data for the same gas turbine system. It was originally intended that a model of the GTCP85-98D be constructed in GasTurb 12 and used to predict off-design performance, with the prediction compared to collected experimental data. This is not possible because the free version of GasTurb 12 used in this research does not have a module to model a single-spool turboshaft. This module must be purchased for such an analysis.
Wind Prediction Accuracy for Air Traffic Management Decision Support Tools
NASA Technical Reports Server (NTRS)
Cole, Rod; Green, Steve; Jardin, Matt; Schwartz, Barry; Benjamin, Stan
2000-01-01
The performance of Air Traffic Management and flight deck decision support tools depends in large part on the accuracy of the supporting 4D trajectory predictions. This is particularly relevant to conflict prediction and active advisories for the resolution of conflicts and conformance with traffic-flow management flow-rate constraints (e.g., arrival metering / required time of arrival). Flight test results have indicated that wind prediction errors may represent the largest source of trajectory prediction error. The tests also discovered relatively large errors (e.g., greater than 20 knots), existing in pockets of space and time critical to ATM DST performance (one or more sectors, greater than 20 minutes), that are inadequately represented by the classic RMS aggregate prediction-accuracy studies of the past. To facilitate the identification and reduction of DST-critical wind-prediction errors, NASA has led a collaborative research and development activity with MIT Lincoln Laboratories and the Forecast Systems Lab of the National Oceanographic and Atmospheric Administration (NOAA). This activity, begun in 1996, has focused on the development of key metrics for ATM DST performance, assessment of wind-prediction skill for state-of-the-art systems, and development/validation of system enhancements to improve skill. A 13-month study was conducted for the Denver Center airspace in 1997. Two complementary wind-prediction systems were analyzed and compared to the forecast performance of the then-standard 60 km Rapid Update Cycle - version 1 (RUC-1). One system, developed by NOAA, was the prototype 40-km RUC-2 that became operational at NCEP in 1999. RUC-2 introduced a faster cycle (1 hr vs. 3 hr) and improved mesoscale physics. The second system, Augmented Winds (AW), is a prototype en route wind application developed by MITLL based on the Integrated Terminal Wind System (ITWS). AW is run at a local facility (Center) level, and updates RUC predictions based on an optimal interpolation of the latest ACARS reports since the RUC run. This paper presents an overview of the study's results, including the identification and use of new large-error wind-prediction accuracy metrics that are key to ATM DST performance.
Computational Fluid Dynamic Investigation of Loss Mechanisms in a Pulse-Tube Refrigerator
NASA Astrophysics Data System (ADS)
Martin, K.; Esguerra, J.; Dodson, C.; Razani, A.
2015-12-01
In predicting Pulse-Tube Refrigerator (PTR) performance, One-Dimensional (1-D) PTR design and analysis tools such as Gedeon Associates' SAGE® typically include models for performance degradation due to thermodynamically irreversible processes. SAGE®, in particular, accounts for convective loss, turbulent conductive loss and numerical diffusion “loss” via correlation functions based on analysis and empirical testing. In this study, we compare CFD and SAGE® estimates of PTR refrigeration performance for four distinct pulse-tube lengths. Performance predictions from PTR CFD models are compared to SAGE® predictions for all four cases. Then, to further demonstrate the benefits of higher-fidelity and multidimensional CFD simulation, the PTR loss mechanisms are characterized in terms of their spatial and temporal locations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borges, Ronaldo C.; D'Auria, Francesco; Alvim, Antonio Carlos M.
2002-07-01
The Code with - the capability of - Internal Assessment of Uncertainty (CIAU) is a tool proposed by the 'Dipartimento di Ingegneria Meccanica, Nucleare e della Produzione (DIMNP)' of the University of Pisa. Other institutions, including the nuclear regulatory body from Brazil, 'Comissao Nacional de Energia Nuclear', contributed to the development of the tool. The CIAU aims at providing the currently available Relap5/Mod3.2 system code with the integrated capability of performing not only relevant transient calculations but also the related estimates of uncertainty bands. The Uncertainty Methodology based on Accuracy Extrapolation (UMAE) is used to characterize the uncertainty in the prediction of system code calculations for light water reactors and is internally coupled with the above system code. Following an overview of the CIAU development, the present paper deals with the independent qualification of the tool. The qualification test is performed by estimating the uncertainty bands that should envelope the prediction of the Angra 1 NPP transient RES-11.99, originated by an inadvertent complete load rejection that caused the reactor scram when the unit was operating at 99% of nominal power. The current limitation of the 'error' database implemented into the CIAU prevented a final demonstration of the qualification. However, all the steps of the qualification process are demonstrated. (authors)
BRAKER1: Unsupervised RNA-Seq-Based Genome Annotation with GeneMark-ET and AUGUSTUS.
Hoff, Katharina J; Lange, Simone; Lomsadze, Alexandre; Borodovsky, Mark; Stanke, Mario
2016-03-01
Gene finding in eukaryotic genomes is notoriously difficult to automate. The task is to design a workflow with a minimal set of tools that would reach state-of-the-art performance across a wide range of species. GeneMark-ET is a gene prediction tool that incorporates RNA-Seq data into unsupervised training and subsequently generates ab initio gene predictions. AUGUSTUS is a gene finder that usually requires supervised training and uses information from RNA-Seq reads in the prediction step. Complementary strengths of GeneMark-ET and AUGUSTUS provided motivation for designing a new combined tool for automatic gene prediction. We present BRAKER1, a pipeline for unsupervised RNA-Seq-based genome annotation that combines the advantages of GeneMark-ET and AUGUSTUS. As input, BRAKER1 requires a genome assembly file and a file in bam-format with spliced alignments of RNA-Seq reads to the genome. First, GeneMark-ET performs iterative training and generates initial gene structures. Second, AUGUSTUS uses the predicted genes for training and then integrates RNA-Seq read information into final gene predictions. In our experiments, we observed that BRAKER1 was more accurate than MAKER2 when using RNA-Seq as the sole source for training and prediction. BRAKER1 does not require pre-trained parameters or a separate expert-prepared training step. BRAKER1 is available for download at http://bioinf.uni-greifswald.de/bioinf/braker/ and http://exon.gatech.edu/GeneMark/ katharina.hoff@uni-greifswald.de or borodovsky@gatech.edu Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Wong, Hoong-Seam; Subramaniam, Shridevi; Alias, Zarifah; Taib, Nur Aishah; Ho, Gwo-Fuang; Ng, Char-Hong; Yip, Cheng-Har; Verkooijen, Helena M.; Hartman, Mikael; Bhoo-Pathy, Nirmala
2015-01-01
Web-based prognostication tools may provide a simple and economically feasible option to aid prognostication and selection of chemotherapy in early breast cancers. We validated PREDICT, a free online breast cancer prognostication and treatment benefit tool, in a resource-limited setting. All 1480 patients who underwent complete surgical treatment for stages I to III breast cancer from 1998 to 2006 were identified from the prospective breast cancer registry of University Malaya Medical Centre, Kuala Lumpur, Malaysia. Calibration was evaluated by comparing the model-predicted overall survival (OS) with patients’ actual OS. Model discrimination was tested using receiver-operating characteristic (ROC) analysis. Median age at diagnosis was 50 years. The median tumor size at presentation was 3 cm and 54% of patients had lymph node-negative disease. About 55% of women had estrogen receptor-positive breast cancer. Overall, the model-predicted 5 and 10-year OS was 86.3% and 77.5%, respectively, whereas the observed 5 and 10-year OS was 87.6% (difference: −1.3%) and 74.2% (difference: 3.3%), respectively; P values for goodness-of-fit test were 0.18 and 0.12, respectively. The program was accurate in most subgroups of patients, but significantly overestimated survival in patients aged <40 years, and in those receiving neoadjuvant chemotherapy. PREDICT performed well in terms of discrimination; areas under ROC curve were 0.78 (95% confidence interval [CI]: 0.74–0.81) and 0.73 (95% CI: 0.68–0.78) for 5 and 10-year OS, respectively. Based on its accurate performance in this study, PREDICT may be clinically useful in prognosticating women with breast cancer and personalizing breast cancer treatment in resource-limited settings. PMID:25715267
An Empiric HIV Risk Scoring Tool to Predict HIV-1 Acquisition in African Women.
Balkus, Jennifer E; Brown, Elizabeth; Palanee, Thesla; Nair, Gonasagrie; Gafoor, Zakir; Zhang, Jingyang; Richardson, Barbra A; Chirenje, Zvavahera M; Marrazzo, Jeanne M; Baeten, Jared M
2016-07-01
To develop and validate an HIV risk assessment tool to predict HIV acquisition among African women. Data were analyzed from 3 randomized trials of biomedical HIV prevention interventions among African women (VOICE, HPTN 035, and FEM-PrEP). We implemented standard methods for the development of clinical prediction rules to generate a risk-scoring tool to predict HIV acquisition over the course of 1 year. Performance of the score was assessed through internal and external validations. The final risk score resulting from multivariable modeling included age, married/living with a partner, partner provides financial or material support, partner has other partners, alcohol use, detection of a curable sexually transmitted infection, and herpes simplex virus 2 serostatus. Point values for each factor ranged from 0 to 2, with a maximum possible total score of 11. Scores ≥5 were associated with HIV incidence >5 per 100 person-years and identified 91% of incident HIV infections from among only 64% of women. The area under the curve (AUC) for predictive ability of the score was 0.71 (95% confidence interval [CI]: 0.68 to 0.74), indicating good predictive ability. Risk score performance was generally similar with internal cross-validation (AUC = 0.69; 95% CI: 0.66 to 0.73) and external validation in HPTN 035 (AUC = 0.70; 95% CI: 0.65 to 0.75) and FEM-PrEP (AUC = 0.58; 95% CI: 0.51 to 0.65). A discrete set of characteristics that can be easily assessed in clinical and research settings was predictive of HIV acquisition over 1 year. The use of a validated risk score could improve efficiency of recruitment into HIV prevention research and inform scale-up of HIV prevention strategies in women at highest risk.
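A risk score of this form reduces to a sum of integer points. The sketch below shows the mechanics with placeholder point values patterned on the description (0-2 points per factor, maximum 11); it is not the validated instrument itself.

```python
# Illustrative additive risk score: hypothetical point values per factor,
# patterned on the text (max total 11, scores >= 5 flag higher risk).
def risk_score(age_lt_25, lives_with_partner, partner_support,
               partner_has_partners, alcohol_use, curable_sti, hsv2_pos):
    score = 0
    score += 2 if age_lt_25 else 0             # younger age: higher risk
    score += 1 if not lives_with_partner else 0
    score += 1 if not partner_support else 0
    score += 2 if partner_has_partners else 0
    score += 1 if alcohol_use else 0
    score += 2 if curable_sti else 0
    score += 2 if hsv2_pos else 0
    return score

print(risk_score(True, False, False, True, False, True, True))  # -> 11
```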
Predictive Tools for Severe Dengue Conforming to World Health Organization 2009 Criteria
Carrasco, Luis R.; Leo, Yee Sin; Cook, Alex R.; Lee, Vernon J.; Thein, Tun L.; Go, Chi Jong; Lye, David C.
2014-01-01
Background Dengue causes 50 million infections per year, posing a large disease and economic burden in tropical and subtropical regions. Only a proportion of dengue cases require hospitalization, and predictive tools to triage dengue patients at greater risk of complications may optimize usage of limited healthcare resources. For severe dengue (SD), proposed by the World Health Organization (WHO) 2009 dengue guidelines, predictive tools are lacking. Methods We undertook a retrospective study of adult dengue patients in Tan Tock Seng Hospital, Singapore, from 2006 to 2008. Demographic, clinical and laboratory variables at presentation from dengue polymerase chain reaction-positive and serology-positive patients were used to predict the development of SD after hospitalization using generalized linear models (GLMs). Principal findings Predictive tools compatible with well-resourced and resource-limited settings – not requiring laboratory measurements – performed acceptably with optimism-corrected specificities of 29% and 27% respectively for 90% sensitivity. Higher risk of severe dengue (SD) was associated with female gender, lower than normal hematocrit level, abdominal distension, vomiting and fever on admission. Lower risk of SD was associated with more years of age (in a cohort with an interquartile range of 27–47 years of age), leucopenia and fever duration on admission. Among the warning signs proposed by WHO 2009, we found support for abdominal pain or tenderness and vomiting as predictors of combined forms of SD. Conclusions The application of these predictive tools in the clinical setting may reduce unnecessary admissions by 19% allowing the allocation of scarce public health resources to patients according to the severity of outcomes. PMID:25010515
The artificial membrane insert system as predictive tool for formulation performance evaluation.
Berben, Philippe; Brouwers, Joachim; Augustijns, Patrick
2018-02-15
In view of the increasing interest of pharmaceutical companies for cell- and tissue-free models to implement permeation into formulation testing, this study explored the capability of an artificial membrane insert system (AMI-system) as predictive tool to evaluate the performance of absorption-enabling formulations. Firstly, to explore the usefulness of the AMI-system in supersaturation assessment, permeation was monitored after induction of different degrees of loviride supersaturation. Secondly, to explore the usefulness of the AMI-system in formulation evaluation, a two-stage dissolution test was performed prior to permeation assessment. Different case examples were selected based on the availability of in vivo (intraluminal and systemic) data: (i) a suspension of posaconazole (Noxafil ® ), (ii) a cyclodextrin-based formulation of itraconazole (Sporanox ® ), and (iii) a micronized (Lipanthyl ® ) and nanosized (Lipanthylnano ® ) formulation of fenofibrate. The obtained results demonstrate that the AMI-system is able to capture the impact of loviride supersaturation on permeation. Furthermore, the AMI-system correctly predicted the effects of (i) formulation pH on posaconazole absorption, (ii) dilution on cyclodextrin-based itraconazole absorption, and (iii) food intake on fenofibrate absorption. Based on the applied in vivo/in vitro approach, the AMI-system combined with simple dissolution testing appears to be a time- and cost-effective tool for the early-stage evaluation of absorption-enabling formulations. Copyright © 2017 Elsevier B.V. All rights reserved.
Tohira, Hideo; Jacobs, Ian; Mountain, David; Gibson, Nick; Yeo, Allen
2011-01-01
The Abbreviated Injury Scale (AIS) was revised in 2005 and updated in 2008 (AIS 2008). We aimed to compare the outcome prediction performance of AIS-based injury severity scoring tools using AIS 2008 and AIS 98. We used all major trauma patients admitted to the Royal Perth Hospital between 1994 and 2008. We selected five AIS-based injury severity scoring tools: Injury Severity Score (ISS), New Injury Severity Score (NISS), modified Anatomic Profile (mAP), Trauma and Injury Severity Score (TRISS) and A Severity Characterization of Trauma (ASCOT). We selected survival after injury as the target outcome. We used the area under the Receiver Operating Characteristic curve (AUROC) as a performance measure. First, we compared the five tools using all cases whose records included all variables for the TRISS (complete dataset) using a 10-fold cross-validation. Second, we compared the ISS and NISS for AIS 98 and AIS 2008 using all subjects (whole dataset). We identified 1,269 and 4,174 cases for the complete dataset and the whole dataset, respectively. With the 10-fold cross-validation, there were no clear differences in the AUROCs between the AIS 98- and AIS 2008-based scores. With the second comparison, the AIS 98-based ISS performed significantly worse than the AIS 2008-based ISS (p<0.0001), while there was no significant difference between the AIS 98- and AIS 2008-based NISSs. Researchers should be aware of these findings when they select an injury severity scoring tool for their studies.
NASA Astrophysics Data System (ADS)
Ouyang, Qin; Chen, Quansheng; Zhao, Jiewen
2016-02-01
The approach presented herein reports the application of near infrared (NIR) spectroscopy, in comparison with a human sensory panel, as a tool for estimating Chinese rice wine quality; concretely, to achieve the prediction of the overall sensory scores assigned by the trained sensory panel. A back propagation artificial neural network (BPANN) combined with the adaptive boosting (AdaBoost) algorithm, namely BP-AdaBoost, was proposed as a novel nonlinear modeling algorithm. First, the optimal spectral intervals were selected by synergy interval partial least squares (Si-PLS). Then, a BP-AdaBoost model based on the optimal spectral intervals was established, called the Si-BP-AdaBoost model. These models were optimized by cross-validation, and the performance of each final model was evaluated according to the correlation coefficient (Rp) and root mean square error of prediction (RMSEP) in the prediction set. Si-BP-AdaBoost showed excellent performance in comparison with other models. The best Si-BP-AdaBoost model was achieved with Rp = 0.9180 and RMSEP = 2.23 in the prediction set. It was concluded that NIR spectroscopy combined with Si-BP-AdaBoost is an appropriate method for predicting the sensory quality of Chinese rice wine.
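The interval-selection step can be sketched as follows: divide the spectrum into equal-width intervals and rank them by cross-validated PLS error. This is a simplified, hypothetical rendering (scoring single intervals rather than the synergy combinations Si-PLS searches, on synthetic spectra), not the authors' implementation.

```python
# Minimal sketch of PLS-based interval selection on synthetic NIR data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 200))              # 80 samples x 200 NIR variables
y = X[:, 40:60].sum(axis=1) + rng.normal(scale=0.5, size=80)  # sensory score

n_intervals = 20
width = X.shape[1] // n_intervals
rmse = []
for i in range(n_intervals):
    Xi = X[:, i * width:(i + 1) * width]
    mse = -cross_val_score(PLSRegression(n_components=3), Xi, y,
                           scoring="neg_mean_squared_error", cv=5).mean()
    rmse.append(np.sqrt(mse))

best = np.argsort(rmse)[:3]                 # the three most informative intervals
print("selected intervals:", sorted(best.tolist()))
```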
You Are Your Words: Modeling Students' Vocabulary Knowledge with Natural Language Processing Tools
ERIC Educational Resources Information Center
Allen, Laura K.; McNamara, Danielle S.
2015-01-01
The current study investigates the degree to which the lexical properties of students' essays can inform stealth assessments of their vocabulary knowledge. In particular, we used indices calculated with the natural language processing tool, TAALES, to predict students' performance on a measure of vocabulary knowledge. To this end, two corpora were…
Big Data Toolsets to Pharmacometrics: Application of Machine Learning for Time‐to‐Event Analysis
Gong, Xiajing; Hu, Meng
2018-01-01
Additional value can potentially be created by applying big data tools to address pharmacometric problems. The performances of machine learning (ML) methods and the Cox regression model were evaluated based on simulated time-to-event data synthesized under various preset scenarios, i.e., with linear vs. nonlinear and dependent vs. independent predictors in the proportional hazard function, or with high-dimensional data featuring a large number of predictor variables. Our results showed that ML-based methods outperformed the Cox model in prediction performance as assessed by the concordance index and in identifying the preset influential variables for high-dimensional data. The prediction performances of ML-based methods are also less sensitive to data size and censoring rates than the Cox regression model. In conclusion, ML-based methods provide a powerful tool for time-to-event analysis, with a built-in capacity for high-dimensional data and better performance when the predictor variables assume nonlinear relationships in the hazard function. PMID:29536640
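A minimal sketch of the evaluation setup under stated assumptions: simulate survival times whose hazard depends nonlinearly on a covariate, fit a Cox model, and score it with the concordance index; an ML competitor would be scored the same way. This uses the lifelines library, and all data are fabricated.

```python
# Simulate nonlinear-hazard survival data and score a Cox model by C-index.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(2)
n = 1000
x = rng.normal(size=n)
true_risk = np.exp(1.5 * x**2)                 # nonlinear effect on the hazard
t_event = rng.exponential(1.0 / true_risk)
t_cens = rng.exponential(2.0, size=n)
df = pd.DataFrame({
    "x": x,
    "T": np.minimum(t_event, t_cens),
    "E": (t_event <= t_cens).astype(int),      # 1 = event observed
})

cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
risk = cph.predict_partial_hazard(df)
# Higher risk should mean shorter survival, hence the sign flip.
print("Cox C-index:", concordance_index(df["T"], -risk, df["E"]))
```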
Galehdari, Hamid; Saki, Najmaldin; Mohammadi-Asl, Javad; Rahim, Fakher
2013-01-01
Crigler-Najjar syndrome (CNS) type I and type II are usually inherited as autosomal recessive conditions that result from mutations in the UGT1A1 gene. The main objective of the present review is to summarize all available evidence on the accuracy of SNP-based pathogenicity detection tools, compared with published clinical results, for predicting disease-causing nsSNPs. A comprehensive search was performed to find all mutations related to CNS. Database searches included dbSNP, SNPdbe, HGMD, Swissvar, Ensembl, and OMIM, and all mutations related to CNS were extracted. Pathogenicity prediction was done using SNP-based pathogenicity detection tools, including SIFT, PHD-SNP, PolyPhen2, fathmm, Provean, and Mutpred. Overall, 59 different SNPs related to missense mutations in the UGT1A1 gene were reviewed. Comparing the diagnostic odds ratios (OR), PolyPhen2 and Mutpred had the highest value, 4.983 (95% CI: 1.24-20.02) in both, followed by SIFT (diagnostic OR: 3.25, 95% CI: 1.07-9.83). The highest MCC among the SNP-based pathogenicity detection tools belonged to SIFT (34.19%), followed by Provean, PolyPhen2, and Mutpred (29.99%, 29.89%, and 29.89%, respectively). Likewise, the highest accuracy (ACC) was achieved by SIFT (62.71%), followed by PolyPhen2 and Mutpred (61.02% in both). Our results suggest that some of the well-established SNP-based pathogenicity detection tools can appropriately reflect the role of a disease-associated SNP in both local and global structures.
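For reference, the three reported measures can be computed from a 2x2 confusion matrix as below; this is a generic helper of my own with hypothetical counts, not the review's script.

```python
# Diagnostic odds ratio, Matthews correlation coefficient and accuracy
# from true/false positive and negative counts.
import math

def tool_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    dor = (tp * tn) / (fp * fn) if fp and fn else float("inf")
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    acc = (tp + tn) / (tp + fp + fn + tn)
    return {"diagnostic_OR": dor, "MCC": mcc, "ACC": acc}

# Hypothetical counts for one predictor evaluated against clinical labels:
print(tool_metrics(tp=30, fp=10, fn=9, tn=10))
```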
Atema, Jasper J; Ram, Kim; Schultz, Marcus J; Boermeester, Marja A
Timely identification of patients in need of an intervention for abdominal sepsis after initial surgical management of secondary peritonitis is vital but complex. The aim of this study was to validate a decision tool for this purpose and to evaluate its potential to guide post-operative management. A prospective cohort study was conducted on consecutive adult patients undergoing surgery for secondary peritonitis in a single hospital. Assessments using the decision tool, based on one intra-operative and five post-operative variables, were performed on the second and third post-operative days and when the patients' clinical status deteriorated. Scores were compared with the clinical reference standard of persistent sepsis based on the clinical course or findings at imaging or surgery. Additionally, the potential of the decision tool to guide management in terms of diagnostic imaging in three previously defined score categories (low, intermediate, and high) was evaluated. A total of 161 assessments were performed in 69 patients. The majority of cases of secondary peritonitis (68%) were caused by perforation of the gastrointestinal tract. Post-operative persistent sepsis occurred in 28 patients. The discriminative capacity of the decision tool score was fair (area under the curve of the receiver operating characteristic = 0.79). The incidence rate differed significantly between the three score categories (p < 0.001). The negative predictive value of a decision tool score categorized as low probability was 89% (95% confidence interval [CI] 82-94) and 65% (95% CI 47-79) for an intermediate score. Diagnostic imaging was performed more frequently when there was an intermediate score than when the score was categorized as low (46% vs. 24%; p < 0.001). In patients operated on for secondary peritonitis, the decision tool score predicts with fair accuracy whether persistent sepsis is present.
Risk determination after an acute myocardial infarction: review of 3 clinical risk prediction tools.
Scruth, Elizabeth Ann; Page, Karen; Cheng, Eugene; Campbell, Michelle; Worrall-Carter, Linda
2012-01-01
The objective of the study was to provide comprehensive information for the clinical nurse specialist (CNS) on commonly used clinical prediction (risk assessment) tools used to estimate the risk of a secondary cardiac or noncardiac event and mortality in patients undergoing primary percutaneous coronary intervention (PCI) for ST-elevation myocardial infarction (STEMI). The evolution and widespread adoption of primary PCI represent major advances in the treatment of acute myocardial infarction, specifically STEMI. The American College of Cardiology and the American Heart Association have recommended early risk stratification for patients presenting with acute coronary syndromes, using several clinical risk scores to identify patients' mortality and secondary event risk after PCI. Clinical nurse specialists are integral to any performance improvement strategy. Their knowledge and understanding of clinical prediction tools are essential for carrying out important assessments, identifying and managing risk in patients who have sustained a STEMI, and enhancing discharge education, including counseling on medications and lifestyle changes. Over the past 2 decades, risk scores have been developed from clinical trials to facilitate risk assessment, and several can be used to determine in-hospital and short-term survival. This article critiques the most common tools: the Thrombolysis in Myocardial Infarction risk score, the Global Registry of Acute Coronary Events risk score, and the Controlled Abciximab and Device Investigation to Lower Late Angioplasty Complications risk score. The importance of incorporating risk screening assessment tools (central to clinical prediction models) to guide the therapeutic management of patients cannot be overstated. The ability to forecast secondary risk after a STEMI helps determine which patients require the most aggressive level of treatment and monitoring postintervention, including outpatient monitoring. With an increased awareness of specialist assessment tools, the CNS can play an important role in risk prevention and ongoing cardiovascular health promotion in patients diagnosed with STEMI. Knowledge of clinical prediction tools to estimate the risk of mortality and secondary events after PCI for acute coronary syndromes, including STEMI, is essential for the CNS in helping to improve short- and long-term outcomes and in performance improvement strategies. Risk score assessment, undertaken collaboratively with the multidisciplinary healthcare team, supports the development of a treatment plan, including any invasive intervention strategy, for the patient. Copyright © 2012 Wolters Kluwer Health | Lippincott Williams & Wilkins
Machine learning-based methods for prediction of linear B-cell epitopes.
Wang, Hsin-Wei; Pai, Tun-Wen
2014-01-01
B-cell epitope prediction assists immunologists in designing peptide-based vaccines, diagnostic tests, disease prevention and treatment strategies, and antibody production. In comparison with T-cell epitope prediction, the performance of variable-length B-cell epitope prediction is not yet satisfactory. Fortunately, thanks to increasingly available verified epitope databases, bioinformaticians can apply machine learning-based algorithms to all curated data to design improved prediction tools for biomedical researchers. Here, we have reviewed related epitope prediction papers, especially those for linear B-cell epitope prediction. It should be noted that combining selected propensity scales and statistics of epitope residues with machine learning-based tools has become a general way of constructing linear B-cell epitope prediction systems. It is also observed from most comparison results that the kernel method of the support vector machine (SVM) classifier outperformed other machine learning-based approaches. Hence, in this chapter, in addition to reviewing recently published papers, we introduce the fundamentals of B-cell epitopes and SVM techniques. An example of a linear B-cell epitope prediction system based on physicochemical features and amino acid combinations is also illustrated in detail.
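A toy version of the SVM pipeline the chapter describes might look like the following: each peptide is represented by its amino acid composition (one simple propensity-style feature set) and classified by an RBF-kernel SVM. The peptides and labels are fabricated for illustration only.

```python
# Amino acid composition features + RBF-kernel SVM for epitope vs non-epitope.
import numpy as np
from sklearn.svm import SVC

AA = "ACDEFGHIKLMNPQRSTVWY"

def composition(peptide: str) -> np.ndarray:
    # 20-dimensional amino acid composition vector
    return np.array([peptide.count(a) / len(peptide) for a in AA])

peptides = ["KKDENSGGAS", "LLIVAGFAVL", "NDTSGKQESP", "VVLAGLLIIG"]
labels = [1, 0, 1, 0]                      # 1 = epitope, 0 = non-epitope

X = np.vstack([composition(p) for p in peptides])
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, labels)
print(clf.predict(composition("KKDESGGQAS").reshape(1, -1)))
```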
DockBench as docking selector tool: the lesson learned from D3R Grand Challenge 2015
NASA Astrophysics Data System (ADS)
Salmaso, Veronica; Sturlese, Mattia; Cuzzolin, Alberto; Moro, Stefano
2016-09-01
Structure-based drug design (SBDD) has matured within the last two decades as a valuable tool for the optimization of low molecular weight lead compounds into highly potent drugs. The key step in SBDD requires knowledge of the three-dimensional structure of the target-ligand complex, which is usually determined by X-ray crystallography. In the absence of structural information for the complex, SBDD relies on the generation of plausible molecular docking models. However, molecular docking protocols suffer from inaccuracies in the description of the interaction energies between the ligand and the target molecule, and often fail in the prediction of the correct binding mode. In this context, the appropriate selection of the most accurate docking protocol is highly relevant to the final molecular docking result, even though addressing this point is not at all a trivial task. D3R Grand Challenge 2015 represented a precious opportunity to test the performance of DockBench, an integrated informatics platform to automatically compare the RMSD-based molecular docking performances of different docking/scoring methods. The overall performances obtained in the blind predictions are encouraging, in particular for the pose prediction task, in which several complexes were predicted with sufficient accuracy for medicinal chemistry purposes.
Leung, Alexander A; Keohane, Carol; Lipsitz, Stuart; Zimlichman, Eyal; Amato, Mary; Simon, Steven R; Coffey, Michael; Kaufman, Nathan; Cadet, Bismarck; Schiff, Gordon; Seger, Diane L; Bates, David W
2013-06-01
The Leapfrog CPOE evaluation tool has been promoted as a means of monitoring computerized physician order entry (CPOE). We sought to determine the relationship between Leapfrog scores and the rates of preventable adverse drug events (ADE) and potential ADE. A cross-sectional study of 1000 adult admissions in five community hospitals from October 1, 2008 to September 30, 2010 was performed. Observed rates of preventable ADE and potential ADE were compared with scores reported by the Leapfrog CPOE evaluation tool. The primary outcome was the rate of preventable ADE and the secondary outcome was the composite rate of preventable ADE and potential ADE. Leapfrog performance scores were highly related to the primary outcome. A 43% relative reduction in the rate of preventable ADE was predicted for every 5% increase in Leapfrog scores (rate ratio 0.57; 95% CI 0.37 to 0.88). In absolute terms, four fewer preventable ADE per 100 admissions were predicted for every 5% increase in overall Leapfrog scores (rate difference -4.2; 95% CI -7.4 to -1.1). A statistically significant relationship between Leapfrog scores and the secondary outcome, however, was not detected. Our findings support the use of the Leapfrog tool as a means of evaluating and monitoring CPOE performance after implementation, as addressed by current certification standards. Scores from the Leapfrog CPOE evaluation tool closely relate to actual rates of preventable ADE. Leapfrog testing may alert providers to potential vulnerabilities and highlight areas for further improvement.
Comparing the effectiveness of TWEAK and T-ACE in determining problem drinkers in pregnancy.
Sarkar, M; Einarson, T; Koren, G
2010-01-01
The TWEAK and T-ACE screening tools are validated methods of identifying problem drinking in a pregnant population. The objective of this study was to compare the effectiveness of the TWEAK and T-ACE screening tools in identifying problem drinking using traditional cut-points (CP). Study participants consisted of women calling the Motherisk Alcohol Helpline for information regarding their alcohol use in pregnancy. In this cohort, concerns about underreporting are unlikely, as the women self-report their alcohol consumption. Each participant's self-identification, confirmed by her reported amount of alcohol use, determined whether she was a problem drinker. The TWEAK and T-ACE tools were administered to both groups, and subsequent analysis was done to determine whether one tool was more effective in predicting problem drinking. The study consisted of 75 problem and 100 non-problem drinkers. Using traditional CP, the TWEAK and T-ACE tools performed similarly at identifying potential at-risk women (positive predictive value = 0.54), with very high sensitivity rates (100-99% and 100-93%, respectively) but poor specificity rates (36-43% and 19-34%, respectively). Upon comparison, there was no statistically significant difference in effectiveness between the two tests using either a CP of 2 (P = 0.66) or a CP of 3 (P = 0.38). Despite the lack of difference in performance, the improved specificity associated with TWEAK suggests that it may be better suited to screen at-risk populations seeking advice from a helpline.
Hybrid and Electric Advanced Vehicle Systems Simulation
NASA Technical Reports Server (NTRS)
Beach, R. F.; Hammond, R. A.; Mcgehee, R. K.
1985-01-01
Predefined components are connected to represent a wide variety of propulsion systems. The Hybrid and Electric Advanced Vehicle System (HEAVY) computer program is a flexible tool for evaluating the performance and cost of electric and hybrid vehicle propulsion systems. It allows the designer to quickly, conveniently, and economically predict the performance of a proposed drive train.
NASA Astrophysics Data System (ADS)
Prasetyo, T.; Amar, S.; Arendra, A.; Zam Zami, M. K.
2018-01-01
This study develops an on-line detection system to predict the wear of the DCMT070204 tool tip during cutting of the workpiece. The machine used in this research is a CNC ProTurn 9000 lathe cutting ST42 steel cylinders. The audio signal was captured using a microphone placed on the tool post and recorded in Matlab at a sampling rate of 44.1 kHz with a frame size of 1024 samples. The recorded dataset comprises 110 frames acquired while cutting with a normal tool and with a worn tool. Signal features were then extracted in the frequency domain using the Fast Fourier Transform, and features were selected based on correlation analysis. Tool wear classification was performed using an artificial neural network with the 33 selected input features, trained with the back propagation method. Classification performance testing yielded an accuracy of 74%.
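A condensed, hypothetical rendering of that pipeline (FFT of each audio frame, a few spectral features, and a back-propagation-trained network) is sketched below, with synthetic signals standing in for the 110 recorded frames and 33 selected features.

```python
# Audio frame -> FFT features -> MLP classifier for normal vs worn tool.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
fs, n = 44_100, 1024

def spectral_features(frame: np.ndarray) -> np.ndarray:
    mag = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    centroid = (freqs * mag).sum() / mag.sum()
    return np.array([mag.mean(), mag.max(), centroid])

# Fake "normal" vs "worn" frames: worn cutting shifts energy upward.
frames, labels = [], []
for worn in (0, 1):
    for _ in range(55):
        tone = np.sin(2 * np.pi * (2000 + 3000 * worn) * np.arange(n) / fs)
        frames.append(spectral_features(tone + 0.5 * rng.normal(size=n)))
        labels.append(worn)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(np.vstack(frames), labels)
print("training accuracy:", clf.score(np.vstack(frames), labels))
```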
Energy Economics of Farm Biogas in Cold Climates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pillay, Pragasen; Grimberg, Stefan; Powers, Susan E
Anaerobic digestion of farm and dairy waste has been shown to be capital intensive. One way to improve digester economics is to co-digest high-energy substrates together with the dairy manure. Cheese whey, for example, represents a high-energy substrate that is generated during cheese manufacture. There are currently no quantitative tools available that predict the performance of co-digestion farm systems. The goal of this project was to develop a mathematical tool that would (1) predict the impact of co-digestion and (2) determine the best use of the generated biogas for a cheese manufacturing plant. Two models were developed that could separately be used to meet both goals of the project. Given current pricing structures, the most economical use of the generated biogas at the cheese manufacturing plant was as a replacement for fuel oil to generate heat. The developed digester model accurately predicted the performance of 26 farm digesters operating in the northeastern U.S.
Tank System Integrated Model: A Cryogenic Tank Performance Prediction Program
NASA Technical Reports Server (NTRS)
Bolshinskiy, L. G.; Hedayat, A.; Hastings, L. J.; Sutherlin, S. G.; Schnell, A. R.; Moder, J. P.
2017-01-01
Accurate predictions of the thermodynamic state of cryogenic propellants, the pressurization rate, and the performance of pressure control techniques in cryogenic tanks are required for the development of cryogenic fluid long-duration storage technology and for planning future space exploration missions. This Technical Memorandum (TM) presents the analytical tool, Tank System Integrated Model (TankSIM), which can be used for modeling pressure control and predicting the behavior of cryogenic propellant in long-term storage for future space missions. Utilizing TankSIM, the following processes can be modeled: tank self-pressurization, boiloff, ullage venting, mixing, and condensation on the tank wall. This TM also includes comparisons of TankSIM program predictions with test data and examples of multiphase mission calculations.
NASA Technical Reports Server (NTRS)
Likhanskii, Alexandre
2012-01-01
This report is the final report of an SBIR Phase I project. It is identical to the final report submitted, after some proprietary information of an administrative nature has been removed. The development of a numerical simulation tool for dielectric barrier discharge (DBD) plasma actuators is reported. The objectives of the project were to analyze and predict DBD operation over a wide range of ambient gas pressures. The tool overcomes the limitations of traditional DBD codes, which are restricted to low-speed applications and have weak predictive capabilities, and allows DBD actuator analysis and prediction from the subsonic to the hypersonic flow regime. The simulation tool is based on the VORPAL code developed by Tech-X Corporation. VORPAL's capability of modeling a DBD plasma actuator at low pressures (0.1 to 10 torr) using a kinetic plasma modeling approach, and at moderate to atmospheric pressures (1 to 10 atm) using a hydrodynamic plasma modeling approach, was demonstrated. In addition, results of experiments with a pulsed+bias DBD configuration, performed for validation purposes, are reported.
Magarey, Roger; Newton, Leslie; Hong, Seung C.; Takeuchi, Yu; Christie, Dave; Jarnevich, Catherine S.; Kohl, Lisa; Damus, Martin; Higgins, Steven I.; Miller, Leah; Castro, Karen; West, Amanda; Hastings, John; Cook, Gericke; Kartesz, John; Koop, Anthony
2018-01-01
This study compares four models for predicting the potential distribution of non-indigenous weed species in the conterminous U.S. The comparison focused on evaluating modeling tools and protocols as currently used for weed risk assessment or for predicting the potential distribution of invasive weeds. We used six weed species (three highly invasive and three less invasive non-indigenous species) that have been established in the U.S. for more than 75 years. The experiment involved providing non-U.S. location data to users familiar with one of the four evaluated techniques, who then developed predictive models that were applied to the United States without knowing the identity of the species or its U.S. distribution. We compared a simple GIS climate matching technique known as Proto3, a simple climate matching tool CLIMEX Match Climates, the correlative model MaxEnt, and a process model known as the Thornley Transport Resistance (TTR) model. Two experienced users ran each modeling tool except TTR, which had one user. Models were trained with global species distribution data excluding any U.S. data, and then were evaluated using the current known U.S. distribution. The influence of weed species identity and modeling tool on prevalence and sensitivity effects was compared using a generalized linear mixed model. The choice of modeling tool itself had low statistical significance, while weed species alone accounted for 69.1% and 48.5% of the variance in prevalence and sensitivity, respectively. These results suggest that simple modeling tools might perform as well as complex ones in the case of predicting the potential distribution of a weed not yet present in the United States. Considerations of model accuracy should also be balanced with those of reproducibility and ease of use. More important than the choice of modeling tool is the construction of robust protocols and the testing of both new and experienced users under blind test conditions that approximate operational conditions.
NASA Technical Reports Server (NTRS)
Miller, David W.; Uebelhart, Scott A.; Blaurock, Carl
2004-01-01
This report summarizes work performed by the Space Systems Laboratory (SSL) for NASA Langley Research Center in the field of performance optimization for systems subject to uncertainty. The objective of the research is to bring design methods and tools to the aerospace vehicle design process that take into account lifecycle uncertainties. It recognizes that uncertainty between the predictions of integrated models and data collected from the system in its operational environment is unavoidable. Given the presence of uncertainty, the goal of this work is to develop means of identifying critical sources of uncertainty and to combine these with the analytical tools used in integrated modeling. In this manner, system uncertainty analysis becomes part of the design process and can motivate redesign. The specific program objectives were: 1. To incorporate uncertainty modeling, propagation and analysis into the integrated (controls, structures, payloads, disturbances, etc.) design process to derive the error bars associated with performance predictions. 2. To apply modern optimization tools to guide the expenditure of funds in a way that most cost-effectively improves the lifecycle productivity of the system by enhancing subsystem reliability and redundancy. Results for the second program objective are reported separately; this report describes the work and results for the first objective: uncertainty modeling, propagation, and synthesis with integrated modeling.
Analysis of high vacuum systems using SINDA'85
NASA Technical Reports Server (NTRS)
Spivey, R. A.; Clanton, S. E.; Moore, J. D.
1993-01-01
The theory, algorithms, and test data correlation analysis of a math model developed to predict performance of the Space Station Freedom Vacuum Exhaust System are presented. The theory used to predict the flow characteristics of viscous, transition, and molecular flow is presented in detail. Development of user subroutines which predict the flow characteristics in conjunction with the SINDA'85/FLUINT analysis software are discussed. The resistance-capacitance network approach with application to vacuum system analysis is demonstrated and results from the model are correlated with test data. The model was developed to predict the performance of the Space Station Freedom Vacuum Exhaust System. However, the unique use of the user subroutines developed in this model and written into the SINDA'85/FLUINT thermal analysis model provides a powerful tool that can be used to predict the transient performance of vacuum systems and gas flow in tubes of virtually any geometry. This can be accomplished using a resistance-capacitance (R-C) method very similar to the methods used to perform thermal analyses.
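The R-C analogy can be made concrete with a minimal sketch: treating tank volume as the capacitance and line conductance as the inverse resistance gives dP/dt = -(C/V)(P - P_out) for an isothermal blowdown. The geometry and conductance values below are invented for illustration; this is not the SINDA'85/FLUINT user subroutine itself.

```python
# Transient tank pressure via the resistance-capacitance vacuum analogy.
import numpy as np
from scipy.integrate import solve_ivp

V = 0.5          # tank volume, m^3  (the "capacitance")
C = 2.0e-3       # line conductance, m^3/s  (the inverse "resistance")
P_out = 1.0e-3   # downstream pressure, Pa

def dPdt(t, P):
    # Isothermal blowdown: flow driven by the pressure difference.
    return -(C / V) * (P - P_out)

sol = solve_ivp(dPdt, t_span=(0.0, 3600.0), y0=[101_325.0],
                t_eval=np.linspace(0.0, 3600.0, 7))
for t, P in zip(sol.t, sol.y[0]):
    print(f"t = {t:6.0f} s   P = {P:10.3e} Pa")
```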
Tiffin, Paul A; Mwandigha, Lazaro M; Paton, Lewis W; Hesselgreaves, H; McLachlan, John C; Finn, Gabrielle M; Kasim, Adetayo S
2016-09-26
The UK Clinical Aptitude Test (UKCAT) has been shown to have a modest but statistically significant ability to predict aspects of academic performance throughout medical school. Previously, this ability has been shown to be incremental to conventional measures of educational performance for the first year of medical school. This study evaluates whether this predictive ability extends throughout the whole of undergraduate medical study and explores the potential impact of using the test as a selection screening tool. This was an observational prospective study, linking UKCAT scores, prior educational attainment and sociodemographic variables with subsequent academic outcomes during the 5 years of UK medical undergraduate training. The participants were 6812 entrants to UK medical schools in 2007-8 using the UKCAT. The main outcome was academic performance at each year of medical school. A receiver operating characteristic (ROC) curve analysis was also conducted, treating the UKCAT as a screening test for a negative academic outcome (failing at least 1 year at first attempt). All four of the UKCAT scale scores significantly predicted performance in theory- and skills-based exams. After adjustment for prior educational achievement, the UKCAT scale scores remained significantly predictive for most years. Findings from the ROC analysis suggested that, if used as a sole screening test, with the mean applicant UKCAT score as the cut-off, the test could be used to reject candidates at high risk of failing at least 1 year at first attempt. However, the 'number needed to reject' value would be high (at 1.18), with roughly one candidate who would have been likely to pass all years at first sitting being rejected for every higher risk candidate potentially declined entry on this basis. The UKCAT scores demonstrate a statistically significant but modest degree of incremental predictive validity throughout undergraduate training. Whilst the UKCAT could be considered a fairly crude screening tool for future academic performance, it may offer added value when used in conjunction with other selection measures. Future work should focus on the optimum role of such tests within the selection process and the prediction of post-graduate performance.
Li, Fuyi; Li, Chen; Marquez-Lago, Tatiana T; Leier, André; Akutsu, Tatsuya; Purcell, Anthony W; Smith, A Ian; Lithgow, Trevor; Daly, Roger J; Song, Jiangning; Chou, Kuo-Chen
2018-06-27
Kinase-regulated phosphorylation is a ubiquitous type of post-translational modification (PTM) in both eukaryotic and prokaryotic cells. Phosphorylation plays fundamental roles in many signalling pathways and biological processes, such as protein degradation and protein-protein interactions. Experimental studies have revealed that signalling defects caused by aberrant phosphorylation are highly associated with a variety of human diseases, especially cancers. In light of this, a number of computational methods aiming to accurately predict protein kinase family-specific or kinase-specific phosphorylation sites have been established, thereby facilitating phosphoproteomic data analysis. In this work, we present Quokka, a novel bioinformatics tool that allows users to rapidly and accurately identify human kinase family-regulated phosphorylation sites. Quokka was developed by using a variety of sequence scoring functions combined with an optimized logistic regression algorithm. We evaluated Quokka based on well-prepared up-to-date benchmark and independent test datasets, curated from the Phospho.ELM and UniProt databases, respectively. The independent test demonstrates that Quokka improves the prediction performance compared with state-of-the-art computational tools for phosphorylation prediction. In summary, our tool provides users with high-quality predicted human phosphorylation sites for hypothesis generation and biological validation. The Quokka webserver and datasets are freely available at http://quokka.erc.monash.edu/. Supplementary data are available at Bioinformatics online.
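The general shape of such an approach, window-based sequence features fed to a logistic regression, is sketched below. The real Quokka tool uses curated Phospho.ELM training data and optimized kinase-family-specific scoring functions, so the encoding, windows, and labels here are hypothetical stand-ins.

```python
# One-hot residue windows + logistic regression for site prediction.
import numpy as np
from sklearn.linear_model import LogisticRegression

AA = "ACDEFGHIKLMNPQRSTVWY"

def one_hot(window: str) -> np.ndarray:
    # Encode a fixed-length residue window as a flat one-hot vector.
    v = np.zeros((len(window), len(AA)))
    for i, aa in enumerate(window):
        v[i, AA.index(aa)] = 1.0
    return v.ravel()

windows = ["RRASVAG", "PLASPKL", "GGGSGLV", "IVLSAGE"]   # centered on S/T
labels = [1, 1, 0, 0]                                    # 1 = phosphorylated

X = np.vstack([one_hot(w) for w in windows])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict_proba(one_hot("RKASLDG").reshape(1, -1))[:, 1])
```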
Risk Prediction Models for Acute Kidney Injury in Critically Ill Patients: Opus in Progressu.
Neyra, Javier A; Leaf, David E
2018-05-31
Acute kidney injury (AKI) is a complex systemic syndrome associated with high morbidity and mortality. Among critically ill patients admitted to intensive care units (ICUs), the incidence of AKI is as high as 50% and is associated with dismal outcomes. Thus, the development and validation of clinical risk prediction tools that accurately identify patients at high risk for AKI in the ICU is of paramount importance. We provide a comprehensive review of 3 clinical risk prediction tools that have been developed for incident AKI occurring in the first few hours or days following admission to the ICU. We found substantial heterogeneity among the clinical variables that were examined and included as significant predictors of AKI in the final models. The area under the receiver operating characteristic curves was ∼0.8 for all 3 models, indicating satisfactory model performance, though positive predictive values ranged from only 23 to 38%. Hence, further research is needed to develop more accurate and reproducible clinical risk prediction tools. Strategies for improved assessment of AKI susceptibility in the ICU include the incorporation of dynamic (time-varying) clinical parameters, as well as biomarker, functional, imaging, and genomic data. © 2018 S. Karger AG, Basel.
Measurements and Predictions for a Distributed Exhaust Nozzle
NASA Technical Reports Server (NTRS)
Kinzie, Kevin W.; Brown, Martha C.; Schein, David B.; Solomon, W. David, Jr.
2001-01-01
The acoustic and aerodynamic performance characteristics of a distributed exhaust nozzle (DEN) design concept were evaluated experimentally and analytically with the purpose of developing a design methodology for developing future DEN technology. Aerodynamic and acoustic measurements were made to evaluate the DEN performance and the CFD design tool. While the CFD approach did provide an excellent prediction of the flowfield and aerodynamic performance characteristics of the DEN and 2D reference nozzle, the measured acoustic suppression potential of this particular DEN was low. The measurements and predictions indicated that the mini-exhaust jets comprising the distributed exhaust coalesced back into a single stream jet very shortly after leaving the nozzles. Even so, the database provided here will be useful for future distributed exhaust designs with greater noise reduction and aerodynamic performance potential.
BRCA1/2 missense mutations and the value of in-silico analyses.
Sadowski, Carolin E; Kohlstedt, Daniela; Meisel, Cornelia; Keller, Katja; Becker, Kerstin; Mackenroth, Luisa; Rump, Andreas; Schröck, Evelin; Wimberger, Pauline; Kast, Karin
2017-11-01
The clinical implications of genetic variants in BRCA1/2 in healthy and affected individuals are considerable. Variant interpretation, however, is especially challenging for missense variants, the majority of which are classified as variants of unknown clinical significance (VUS). Computational (in-silico) predictive programs are easy to access, but represent only one tool out of a wide range of complementary approaches to classify VUS. With this single-center study, we aimed to evaluate the impact of in-silico analyses on a spectrum of different BRCA1/2 missense variants. We conducted mutation analysis of BRCA1/2 in 523 index patients with suspected hereditary breast and ovarian cancer (HBOC). Classification of the genetic variants was performed according to the German Consortium (GC)-HBOC database. Additionally, all missense variants were classified by the following three in-silico prediction tools: SIFT, Mutation Taster (MT2) and PolyPhen2 (PPH2). Overall, 201 different variants, 68 of which constituted missense variants, were ranked as pathogenic, neutral, or unknown. The classification of missense variants by in-silico tools resulted in a higher proportion of pathogenic mutations (25% vs. 13.2%) compared with the GC-HBOC classification. Altogether, more than fifty percent (38/68, 55.9%) of missense variants were ranked differently. The sensitivity of the in-silico tools for mutation prediction was 88.9% (PPH2), 100% (SIFT) and 100% (MT2). We found a relevant discrepancy in variant classification by using in-silico prediction tools, resulting in potential overestimation and/or underestimation of cancer risk. More reliable, notably gene-specific, prediction tools and functional tests are needed to improve clinical counseling. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Pal, Parimal; Thakura, Ritwik; Chakrabortty, Sankha
2016-05-01
A user-friendly, menu-driven simulation software tool has been developed for the first time to optimize and analyze the system performance of an advanced continuous membrane-integrated pharmaceutical wastewater treatment plant. The software allows pre-analysis and manipulation of input data, which helps in optimization, and shows the software performance visually on a graphical platform. Moreover, the software helps the user to "visualize" the effects of the operating parameters through its model-predicted output profiles. The software is based on a dynamic mathematical model, developed for a systematically integrated forward osmosis-nanofiltration process for the removal of toxic organic compounds from pharmaceutical wastewater. The model-predicted values were observed to agree well with extensive experimental investigations and were found to be consistent under varying operating conditions such as operating pressure, operating flow rate, and draw solute concentration. Low values of the relative error (RE = 0.09) and high values of the Willmott index of agreement (d = 0.981) reflected a high degree of accuracy and reliability of the software. This software is likely to be a very efficient tool for the design or simulation of an advanced membrane-integrated treatment plant for hazardous wastewater.
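The two agreement statistics quoted above can be written out explicitly. The helper below is a generic implementation with invented data, not the authors' code; Willmott's index is d = 1 - Σ(P-O)² / Σ(|P-Ō| + |O-Ō|)².

```python
# Relative error and Willmott's index of agreement for model validation.
import numpy as np

def relative_error(obs: np.ndarray, pred: np.ndarray) -> float:
    return float(np.mean(np.abs(pred - obs) / obs))

def willmott_d(obs: np.ndarray, pred: np.ndarray) -> float:
    obar = obs.mean()
    num = np.sum((pred - obs) ** 2)
    den = np.sum((np.abs(pred - obar) + np.abs(obs - obar)) ** 2)
    return float(1.0 - num / den)

obs = np.array([12.1, 15.4, 18.9, 22.3, 25.8])     # hypothetical flux data
pred = np.array([11.8, 15.9, 18.2, 23.0, 25.1])
print(relative_error(obs, pred), willmott_d(obs, pred))
```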
GeneSCF: a real-time based functional enrichment tool with support for multiple organisms.
Subhash, Santhilal; Kanduri, Chandrasekhar
2016-09-13
High-throughput technologies such as ChIP-sequencing, RNA-sequencing, DNA sequencing and quantitative metabolomics generate a huge volume of data. Researchers often rely on functional enrichment tools to interpret the biological significance of the affected genes from these high-throughput studies. However, currently available functional enrichment tools need to be updated frequently to adapt to new entries from the functional database repositories. Hence there is a need for a simplified tool that can perform functional enrichment analysis using updated information directly from source databases such as KEGG, Reactome or Gene Ontology. In this study, we focused on designing a command-line tool called GeneSCF (Gene Set Clustering based on Functional annotations), which predicts the functionally relevant biological information for a set of genes in a real-time, updated manner. It is designed to handle information for more than 4000 organisms from freely available, prominent functional databases such as KEGG, Reactome and Gene Ontology. We successfully employed our tool on two published datasets to predict the biologically relevant functional information. The core features of this tool were tested on Linux machines without the need to install additional dependencies. GeneSCF is more reliable than other enrichment tools because of its ability to use reference functional databases in real time to perform enrichment analysis, and it integrates easily with other pipelines available for downstream analysis of high-throughput data. More importantly, GeneSCF can run multiple gene lists simultaneously on different organisms, thereby saving time for the users. Since the tool is designed to be ready-to-use, there is no need for any complex compilation and installation procedures.
A Consensus Method for the Prediction of ‘Aggregation-Prone’ Peptides in Globular Proteins
Tsolis, Antonios C.; Papandreou, Nikos C.; Iconomidou, Vassiliki A.; Hamodrakas, Stavros J.
2013-01-01
The purpose of this work was to construct a consensus prediction algorithm for 'aggregation-prone' peptides in globular proteins, combining existing tools. This allows comparison of the different algorithms and the production of more objective and accurate results. Eleven (11) individual methods are combined to produce AMYLPRED2, a web tool freely available to academic users (http://biophysics.biol.uoa.gr/AMYLPRED2), for the consensus prediction of amyloidogenic determinants/'aggregation-prone' peptides in proteins from sequence alone. The performance of AMYLPRED2 indicates that it functions better than the individual aggregation-prediction algorithms, as perhaps expected. AMYLPRED2 is a useful tool for identifying amyloid-forming regions in proteins that are associated with several conformational diseases, called amyloidoses, such as Alzheimer's, Parkinson's, prion diseases and type II diabetes. It may also be useful for understanding the properties of protein folding and misfolding and for helping to control protein aggregation/solubility in biotechnology (recombinant proteins forming bacterial inclusion bodies) and biotherapeutics (monoclonal antibodies and biopharmaceutical proteins). PMID:23326595
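The consensus principle is simple to state in code: flag a residue as aggregation-prone when at least some minimum number of component tools agree. The sketch below uses invented binary predictor outputs and a half-of-the-tools threshold, which may differ from AMYLPRED2's actual consensus rule.

```python
# Majority-vote consensus over per-residue binary predictions.
import numpy as np

# rows = individual tools, columns = residues (1 = predicted aggregation-prone)
votes = np.array([
    [0, 0, 1, 1, 1, 0, 0, 0, 1, 1],
    [0, 1, 1, 1, 0, 0, 0, 0, 1, 1],
    [0, 0, 1, 1, 1, 0, 1, 0, 0, 1],
])

threshold = votes.shape[0] / 2          # at least half of the tools agree
consensus = (votes.sum(axis=0) >= threshold).astype(int)
print("consensus profile:", consensus)
```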
Analysis of Orbital Lifetime Prediction Parameters in Preparation for Post-Mission Disposal
NASA Astrophysics Data System (ADS)
Choi, Ha-Yeon; Kim, Hae-Dong; Seong, Jae-Dong
2015-12-01
Atmospheric drag is an important source of perturbation for satellites in Low Earth Orbit (LEO), and solar activity is a major factor in changes of atmospheric density. In particular, the orbital lifetime of a satellite varies with changes in solar activity, so care must be taken in predicting the remaining orbital lifetime during preparation for post-mission disposal. In this paper, the System Tool Kit (STK®) Long-term Orbit Propagator is used to analyze the changes in orbital lifetime predictions with respect to solar activity. In addition, the STK® Lifetime tool is used to analyze the change in orbital lifetime with respect to the solar flux data used in the lifetime calculation and to the choice of drag coefficient. Analysis showed that applying the most recent solar flux file within the Lifetime tool gives a predicted trend closest to the actual orbit. We also examined the effect of the drag coefficient by performing a comparative analysis between varying and constant coefficients under different solar activity intensities.
Myles, Puja R; Nguyen-Van-Tam, Jonathan S; Lim, Wei Shen; Nicholson, Karl G; Brett, Stephen J; Enstone, Joanne E; McMenamin, James; Openshaw, Peter J M; Read, Robert C; Taylor, Bruce L; Bannister, Barbara; Semple, Malcolm G
2012-01-01
Triage tools have an important role in pandemics to identify those most likely to benefit from higher levels of care. We compared Community Assessment Tools (CATs), the CURB-65 score, and the Pandemic Medical Early Warning Score (PMEWS) in predicting higher levels of care (high dependency, Level 2, or intensive care, Level 3) and/or death in patients at or shortly after admission to hospital with A/H1N1 2009 pandemic influenza. This was a case-control analysis using retrospectively collected data from the FLU-CIN cohort (1040 adults, 480 children) with PCR-confirmed A/H1N1 2009 influenza. Areas under receiver operating characteristic curves (AUROC), sensitivities, specificities, positive predictive values and negative predictive values were calculated. CATs best predicted Level 2/3 admissions in both adults [AUROC (95% CI): CATs 0.77 (0.73, 0.80); CURB-65 0.68 (0.64, 0.72); PMEWS 0.68 (0.64, 0.73), p<0.001] and children [AUROC: CATs 0.74 (0.68, 0.80); CURB-65 0.52 (0.46, 0.59); PMEWS 0.69 (0.62, 0.75), p<0.001]. CURB-65 and CATs were similar in predicting death in adults, with both performing better than PMEWS, and CATs best predicted death in children. CATs were the best predictor of Level 2/3 care and/or death for both adults and children, and are potentially useful triage tools for predicting the need for higher levels of care and/or mortality in patients of all ages.
NASA Astrophysics Data System (ADS)
Boy, M.; Yaşar, N.; Çiftçi, İ.
2016-11-01
In recent years, turning of hardened steels has replaced grinding for finishing operations. Compared with grinding, hard turning offers higher material removal rates, greater process flexibility, lower equipment costs, and shorter setup times. CBN or ceramic cutting tools are widely used in hard part machining. For successful application of hard turning, selection of suitable cutting parameters for a given cutting tool is an important step. For this purpose, an experimental investigation was conducted to determine the effects of cutting tool edge geometry, feed rate and cutting speed on surface roughness and resultant cutting force in hard turning of AISI H13 steel with ceramic cutting tools. Machining experiments were conducted on a CNC lathe based on a Taguchi experimental design (L16) with different levels of cutting parameters. In the experiments, a Kistler 9257B piezoelectric dynamometer was used to measure the three cutting force components (Fc, Ff and Fr), and surface roughness measurements were performed using a Mahrsurf PS1 device. For statistical analysis, analysis of variance was performed, and mathematical models were developed for surface roughness and resultant cutting force. The analysis of variance results showed that cutting edge geometry, cutting speed and feed rate were the most significant factors for the resultant cutting force, while cutting edge geometry and feed rate were the most significant factors for surface roughness. Regression analysis was applied to predict the outcomes of the experiment, and the predicted and measured values were very close to each other. Afterwards, confirmation tests were performed to compare the predicted results with the measured results. According to the confirmation test results, the measured values are within the 95% confidence interval.
NASA Technical Reports Server (NTRS)
Brock, Joseph M; Stern, Eric
2016-01-01
Dynamic CFD simulations of the SIAD ballistic test model were performed using the US3D flow solver. The motivation for performing these simulations is the validation and verification of the US3D flow solver as a viable computational tool for predicting dynamic coefficients.
Zhang, Cunji; Yao, Xifan; Zhang, Jianming; Jin, Hong
2016-01-01
Tool breakage causes loss of surface finish and dimensional accuracy in the machined part, or possible damage to the workpiece or machine. Tool Condition Monitoring (TCM) is considerably vital in the manufacturing industry. In this paper, an indirect TCM approach is introduced with a wireless triaxial accelerometer. The vibrations in the three orthogonal directions (x, y and z) are acquired during milling operations, and the raw signals are de-noised by wavelet analysis. Features of the de-noised signals are extracted in the time, frequency and time-frequency domains, and the key features are selected based on Pearson's Correlation Coefficient (PCC). A Neuro-Fuzzy Network (NFN) is adopted to predict the tool wear and Remaining Useful Life (RUL). In comparison with a Back Propagation Neural Network (BPNN) and a Radial Basis Function Network (RBFN), the results show that the NFN has the best performance in the prediction of tool wear and RUL. PMID:27258277
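The PCC-based selection step can be sketched as ranking candidate features by their absolute Pearson correlation with measured wear and keeping the strongest ones; the arrays below are synthetic placeholders, not the study's vibration features.

```python
# Rank candidate features by |Pearson correlation| with measured wear.
import numpy as np

rng = np.random.default_rng(4)
n_samples, n_features = 60, 12
features = rng.normal(size=(n_samples, n_features))
wear = (0.9 * features[:, 2] + 0.7 * features[:, 7]
        + rng.normal(scale=0.3, size=n_samples))

pcc = np.array([np.corrcoef(features[:, j], wear)[0, 1]
                for j in range(n_features)])
key = np.argsort(-np.abs(pcc))[:4]      # the four most correlated features
print("selected feature indices:", sorted(key.tolist()))
```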
Sathiyamoorthy, V; Sekar, T; Elango, N
2015-01-01
Formation of spikes prevents achieving better material removal rates (MRR) and surface finish when using plain NaNO3 aqueous electrolyte in electrochemical machining (ECM) of die tool steel. Hence this research work attempts to minimize the formation of spikes on the selected workpiece of high carbon, high chromium die tool steel using copper nanoparticles suspended in NaNO3 aqueous electrolyte, that is, a nanofluid. The selected influencing parameters are applied voltage and electrolyte discharge rate, with three levels each, and tool feed rate, with four levels. Thirty-six experiments were designed using Design Expert 7.0 software, and optimization was done using a multiobjective genetic algorithm (MOGA), which identified the best possible combination for achieving better MRR and surface roughness. The results reveal that a voltage of 18 V, a tool feed rate of 0.54 mm/min, and a nanofluid discharge rate of 12 lit/min are the optimum values in ECM of HCHCr die tool steel. To check the optimality obtained from the MOGA in MATLAB software, a maximum MRR of 375.78277 mm(3)/min and a corresponding surface roughness Ra of 2.339779 μm were predicted at an applied voltage of 17.688986 V, a tool feed rate of 0.5399705 mm/min, and a nanofluid discharge rate of 11.998816 lit/min. Confirmatory tests showed that the actual performance at the optimum conditions was 361.214 mm(3)/min and 2.41 μm; the deviation from the predicted performance is less than 4%, which confirms the validity of the developed models.
Eksborg, Staffan
2013-01-01
Pharmacokinetic studies are important for optimizing drug dosing, but they require proper validation of the pharmacokinetic procedures used. However, simple and reliable statistical methods suitable for evaluating the predictive performance of pharmacokinetic analyses are essentially lacking. The aim of the present study was to construct and evaluate a graphical procedure for quantifying the predictive performance of individual and population pharmacokinetic compartment analysis. Original data from previously published pharmacokinetic compartment analyses after intravenous, oral, and epidural administration, and digitized data obtained from published scatter plots of observed vs predicted drug concentrations from population pharmacokinetic studies using the NPEM algorithm, the NONMEM computer program and Bayesian forecasting procedures, were used to estimate predictive performance according to the proposed graphical method and by the method of Sheiner and Beal. The graphical plot proposed in the present paper proved to be a useful tool for evaluating the predictive performance of both individual and population compartment pharmacokinetic analyses. The proposed method is simple to use and gives valuable information concerning time- and concentration-dependent inaccuracies that might occur in individual and population pharmacokinetic compartment analysis. Predictive performance can be quantified as the fraction of concentration ratios within arbitrarily specified ranges, e.g. within the range 0.8-1.2.
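My reading of the proposed quantification, in code form (not the published script): the fraction of predicted/observed concentration ratios falling inside a specified window such as 0.8-1.2, computed on invented concentration data.

```python
# Fraction of predicted/observed concentration ratios within a range.
import numpy as np

def fraction_within(obs, pred, lo=0.8, hi=1.2):
    ratios = np.asarray(pred) / np.asarray(obs)
    return float(np.mean((ratios >= lo) & (ratios <= hi)))

obs = np.array([2.1, 4.8, 7.5, 10.2, 14.9])    # hypothetical concentrations
pred = np.array([2.0, 5.6, 7.1, 13.0, 14.2])
print(f"{fraction_within(obs, pred):.0%} of ratios within 0.8-1.2")
```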
Spindle Thermal Error Optimization Modeling of a Five-axis Machine Tool
NASA Astrophysics Data System (ADS)
Guo, Qianjian; Fan, Shuo; Xu, Rufeng; Cheng, Xiang; Zhao, Guoyong; Yang, Jianguo
2017-05-01
Aiming at the problems of low machining accuracy and uncontrolled thermal errors in NC machine tools, the measurement, modeling and compensation of spindle thermal errors for a two-turntable five-axis machine tool are researched. Measurement experiments on heat sources and thermal errors were carried out, and the GRA (grey relational analysis) method is introduced for selecting the temperature variables used in thermal error modeling. In order to analyze the influence of different heat sources on spindle thermal errors, an ANN (artificial neural network) model is presented, and the ABC (artificial bee colony) algorithm is introduced to train the link weights of the ANN; a new ABC-NN (artificial bee colony-based neural network) modeling method is proposed and used for predicting spindle thermal errors. In order to test the prediction performance of the ABC-NN model, an experimental system was developed, and the prediction results of LSR (least squares regression), ANN and ABC-NN were compared with measured spindle thermal errors. Experimental results show that the prediction accuracy of the ABC-NN model is higher than that of LSR and ANN, with residual errors smaller than 3 μm, demonstrating that the new modeling method is feasible. The proposed research provides guidance for compensating thermal errors and improving the machining accuracy of NC machine tools.
NASA Astrophysics Data System (ADS)
Haack, Lukas; Peniche, Ricardo; Sommer, Lutz; Kather, Alfons
2017-06-01
At early project stages, the main CSP plant design parameters, such as turbine capacity, solar field size, and thermal storage capacity, are varied during techno-economic optimization to determine the most suitable plant configurations. In general, a typical meteorological year with at least hourly time resolution is used to analyze each plant configuration. Different software tools are available to simulate the annual energy yield. Software tools offering a thermodynamic modeling approach to the power block and the CSP thermal cycle, such as EBSILONProfessional®, allow a flexible definition of plant topologies. In EBSILON, the thermodynamic equilibrium for each time step is calculated iteratively (quasi steady state), which requires approximately 45 minutes to process one year at hourly resolution. For better representation of gradients, 10 min time resolution is recommended, which increases processing time by a factor of 5. When analyzing the large number of plant sensitivities required during techno-economic optimization, the detailed thermodynamic simulation approach therefore becomes impracticable. Suntrace has developed an in-house CSP simulation tool (CSPsim), based on EBSILON and applying predictive models, to approximate CSP plant performance for central receiver and parabolic trough technology. CSPsim increases the speed of energy yield calculations by a factor of 35 or more and automates the simulation of all predefined design configurations in sequential order during the optimization procedure. To develop the predictive models, multiple linear regression techniques and Design of Experiments methods were applied. The annual energy yield and derived LCOE calculated by the predictive models deviate by less than ±1.5% from the thermodynamic simulation in EBSILON and effectively identify the optimal range of the main design parameters for further, more specific analysis.
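The surrogate idea might plausibly look like the following (CSPsim's internals are not described in detail here): fit a multiple linear regression on a designed set of detailed simulation runs, then evaluate candidate configurations instantly. The design variables, ranges, and coefficients below are invented.

```python
# Linear-regression surrogate over a designed set of simulation runs.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
# columns: turbine capacity [MW], solar field size [km^2], storage [h]
designs = rng.uniform([50, 0.5, 2], [150, 2.5, 14], size=(40, 3))
annual_yield = (2.1 * designs[:, 0] + 180 * designs[:, 1]
                + 9.5 * designs[:, 2]
                + rng.normal(scale=5, size=40))   # GWh, fabricated

surrogate = LinearRegression().fit(designs, annual_yield)
candidate = np.array([[110.0, 1.8, 9.0]])
print("predicted annual yield [GWh]:", surrogate.predict(candidate)[0])
```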
NASA's Aeroacoustic Tools and Methods for Analysis of Aircraft Noise
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Lopes, Leonard V.; Burley, Casey L.
2015-01-01
Aircraft community noise is a significant concern due to continued growth in air traffic, increasingly stringent environmental goals, and operational limitations imposed by airport authorities. The ability to quantify aircraft noise at the source and ultimately at observers is required to develop low noise aircraft designs and flight procedures. Predicting noise at the source, accounting for scattering and propagation through the atmosphere to the observer, and assessing the perception and impact on a community requires physics-based aeroacoustics tools. Along with the analyses for aero-performance, weights and fuel burn, these tools can provide the acoustic component for aircraft MDAO (Multidisciplinary Design Analysis and Optimization). Over the last decade significant progress has been made in advancing the aeroacoustic tools such that acoustic analyses can now be performed during the design process. One major and enabling advance has been the development of the system noise framework known as Aircraft NOise Prediction Program2 (ANOPP2). ANOPP2 is NASA's aeroacoustic toolset and is designed to facilitate the combination of acoustic approaches of varying fidelity for the analysis of noise from conventional and unconventional aircraft. The toolset includes a framework that integrates noise prediction and propagation methods into a unified system for use within general aircraft analysis software. This includes acoustic analyses, signal processing and interfaces that allow for the assessment of perception of noise on a community. ANOPP2's capability to incorporate medium fidelity shielding predictions and wind tunnel experiments into a design environment is presented. An assessment of noise from a conventional and Hybrid Wing Body (HWB) aircraft using medium fidelity scattering methods combined with noise measurements from a model-scale HWB recently placed in NASA's 14x22 wind tunnel are presented. The results are in the form of community noise metrics and auralizations.
Subramanyam, Rajeev; Yeramaneni, Samrat; Hossain, Mohamed Monir; Anneken, Amy M; Varughese, Anna M
2016-05-01
Perioperative respiratory adverse events (PRAEs) are the most common cause of serious adverse events in children receiving anesthesia. The primary aim of this study was to develop and validate a risk prediction tool for the occurrence of PRAE from the onset of anesthesia induction until discharge from the postanesthesia care unit in children younger than 18 years undergoing elective ambulatory anesthesia for surgery and radiology. The incidence of PRAE was studied. We analyzed data from 19,059 patients from our department's quality improvement database. The predictor variables were age, sex, ASA physical status, morbid obesity, preexisting pulmonary disorder, preexisting neurologic disorder, and location of ambulatory anesthesia (surgery or radiology). Composite PRAE was defined as the presence of any 1 of the following events: intraoperative bronchospasm, intraoperative laryngospasm, postoperative apnea, postoperative laryngospasm, postoperative bronchospasm, or postoperative prolonged oxygen requirement. To develop and validate the risk prediction tool, a split-sampling technique was used to divide the database into 2 independent cohorts based on the year in which the patient received ambulatory anesthesia for surgery or radiology, and logistic regression was performed. A risk score was developed from the regression coefficients. The performance of the risk prediction tool was assessed by using tests of discrimination and calibration. The overall incidence of composite PRAE was 2.8%. The derivation cohort included 8904 patients, and the validation cohort included 10,155 patients. The risk of PRAE was 3.9% in the derivation cohort and 1.8% in the validation cohort. Age ≤3 years (versus >3 years), ASA physical status II or III (versus ASA physical status I), morbid obesity, preexisting pulmonary disorder, and surgery (versus radiology) significantly predicted the occurrence of PRAE in a multivariable logistic regression model. A risk score in the range of 0 to 3 was assigned to each significant variable in the logistic regression model, and the final score for all risk factors ranged from 0 to 11. A cutoff score of 4 was derived from a receiver operating characteristic curve to determine the high-risk category. The model C-statistics and the corresponding SEs for the derivation and validation cohorts were 0.64 ± 0.01 and 0.63 ± 0.02, respectively. The sensitivity and SE of the risk prediction tool for identifying children at risk of PRAE were 77.6 ± 0.02 in the derivation cohort and 76.2 ± 0.03 in the validation cohort. The risk tool developed and validated from our study cohort identified 5 risk factors for PRAE: age ≤3 years (versus >3 years), ASA physical status II or III (versus ASA physical status I), morbid obesity, preexisting pulmonary disorder, and surgery (versus radiology). This tool can be used to provide an individual risk score for each patient to predict the risk of PRAE in the preoperative period.
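The scoring logic described in the abstract (0 to 3 points per significant risk factor, a 0 to 11 total, and an ROC-derived cutoff of 4) can be sketched as follows. The point values below are placeholders chosen only to reproduce the 0 to 11 range; they are not the published weights.

```python
# Illustrative sketch of the PRAE risk tool's scoring logic; the point
# values are hypothetical placeholders, not the published weights.
from dataclasses import dataclass

@dataclass
class Patient:
    age_le_3: bool            # age <= 3 years
    asa_2_or_3: bool          # ASA physical status II or III
    morbid_obesity: bool
    pulmonary_disorder: bool
    surgery: bool             # surgery (versus radiology)

# Hypothetical points per risk factor, summing to the 0-11 range.
POINTS = {"age_le_3": 3, "asa_2_or_3": 2, "morbid_obesity": 2,
          "pulmonary_disorder": 2, "surgery": 2}
CUTOFF = 4  # ROC-derived threshold for the high-risk category

def prae_risk_score(p: Patient) -> int:
    return sum(POINTS[k] for k, v in vars(p).items() if v)

def high_risk(p: Patient) -> bool:
    return prae_risk_score(p) >= CUTOFF

print(high_risk(Patient(True, True, False, False, True)))  # True (score 7)
```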
Development and in-flight performance of the Mariner 9 spacecraft propulsion system
NASA Technical Reports Server (NTRS)
Evans, D. D.; Cannova, R. D.; Cork, M. J.
1972-01-01
On November 14, 1971, Mariner 9 was decelerated into orbit about Mars by a 1334-newton (300-lbf) liquid bipropellant propulsion system. The development and in-flight performance of this pressure-fed, nitrogen tetroxide/monomethyl hydrazine bipropellant system are described and summarized. The design of all Mariner propulsion subsystems has been predicated upon the premise that simplicity of approach, coupled with thorough qualification and margin-limits testing, is the key to cost-effective reliability. The qualification test program and analytical modeling of the Mariner 9 subsystem are discussed. Since the propulsion subsystem is modular in nature, it was completely checked, serviced, and tested independently of the spacecraft. Proper prediction of in-flight performance required the development of three significant modeling tools to predict and account for nitrogen saturation of the propellant during the six-month coast period and to predict and statistically analyze in-flight data. The flight performance of the subsystem was excellent, as were the performance prediction correlations. These correlations are presented.
Predicting the Functional Impact of KCNQ1 Variants of Unknown Significance.
Li, Bian; Mendenhall, Jeffrey L; Kroncke, Brett M; Taylor, Keenan C; Huang, Hui; Smith, Derek K; Vanoye, Carlos G; Blume, Jeffrey D; George, Alfred L; Sanders, Charles R; Meiler, Jens
2017-10-01
An emerging standard-of-care for long-QT syndrome uses clinical genetic testing to identify genetic variants of the KCNQ1 potassium channel. However, interpreting results from genetic testing is confounded by the presence of variants of unknown significance for which there is inadequate evidence of pathogenicity. In this study, we curated from the literature a high-quality set of 107 functionally characterized KCNQ1 variants. Based on this data set, we completed a detailed quantitative analysis of the sequence conservation patterns of subdomains of KCNQ1 and the distribution of pathogenic variants therein. We found that conserved subdomains generally are critical for channel function and are enriched with dysfunctional variants. Using this experimentally validated data set, we trained a neural network, designated Q1VarPred, specifically for predicting the functional impact of KCNQ1 variants of unknown significance. The estimated predictive performance of Q1VarPred in terms of the Matthews correlation coefficient and the area under the receiver operating characteristic curve was 0.581 and 0.884, respectively, superior to the performance of 8 previous methods tested in parallel. Q1VarPred is publicly available as a web server at http://meilerlab.org/q1varpred. Although a plethora of tools are available for making pathogenicity predictions on a genome-wide scale, previous tools fail to perform in a robust manner when applied to KCNQ1. The contrasting and favorable results for Q1VarPred suggest a promising approach, where a machine-learning algorithm is tailored to a specific protein target and trained with a functionally validated data set to calibrate informatics tools. © 2017 American Heart Association, Inc.
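For readers unfamiliar with the two reported metrics, the snippet below shows how a Matthews correlation coefficient and ROC-AUC of the kind quoted above are computed with scikit-learn. The labels and scores are synthetic stand-ins, not Q1VarPred output.

```python
# How the two reported metrics are computed; data are synthetic.
import numpy as np
from sklearn.metrics import matthews_corrcoef, roc_auc_score

y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0])   # 1 = dysfunctional variant
y_score = np.array([0.9, 0.8, 0.3, 0.6, 0.4, 0.2, 0.7, 0.5])

auc = roc_auc_score(y_true, y_score)          # threshold-free ranking metric
y_pred = (y_score >= 0.5).astype(int)         # binarize at a 0.5 cutoff
mcc = matthews_corrcoef(y_true, y_pred)       # balanced binary metric
print(f"MCC={mcc:.3f}, AUC={auc:.3f}")
```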
Ma, Xiao H; Jia, Jia; Zhu, Feng; Xue, Ying; Li, Ze R; Chen, Yu Z
2009-05-01
Machine learning methods have been explored as ligand-based virtual screening tools for facilitating drug lead discovery. These methods predict compounds of specific pharmacodynamic, pharmacokinetic, or toxicological properties based on structural and physicochemical properties derived from their structures. Increasing attention has been directed at these methods because of their capability to predict compounds of diverse structures and complex structure-activity relationships without requiring knowledge of the target 3D structure. This article reviews current progress in using machine learning methods for virtual screening of pharmacodynamically active compounds from large compound libraries, and analyzes and compares the reported performances of machine learning tools with those of structure-based and other ligand-based (such as pharmacophore and clustering) virtual screening methods. The feasibility of improving the performance of machine learning methods in screening large libraries is discussed.
Simulation of the Simbol-X telescope: imaging performance of a deformable x-ray telescope
NASA Astrophysics Data System (ADS)
Chauvin, Maxime; Roques, Jean-Pierre
2009-08-01
We have developed a simulation tool for a Wolter I telescope subject to deformations. The aim is to understand and predict the behavior of Simbol-X and other future missions (NuSTAR, Astro-H, IXO, ...). Our code, based on Monte-Carlo ray tracing, computes full photon trajectories up to the detector plane, taking the deformations into account. The degradation of the imaging system is corrected using metrology. This tool makes it possible to perform many analyses in order to optimize the configuration of any of these telescopes.
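As a rough illustration of the Monte-Carlo approach, the toy sketch below traces photons to a detector plane and measures the image blur introduced when mirror deformations perturb ray directions. The geometry and deformation statistics are invented and bear no relation to actual Simbol-X values.

```python
# Toy Monte-Carlo sketch: a perfect optic focuses every ray to a point,
# while figure errors add small random tilts that blur the focal spot.
import numpy as np

rng = np.random.default_rng(1)
n, focal_len = 100_000, 10.0         # photons; focal length [m], illustrative

# Deformation model: each reflection adds a small random angular error,
# displacing the ray by focal_len * error at the detector plane.
tilt_x = rng.normal(0.0, 5e-6, n)    # [rad], invented figure-error scale
tilt_y = rng.normal(0.0, 5e-6, n)
xd, yd = focal_len * tilt_x, focal_len * tilt_y

r = np.hypot(xd, yd)                 # radial blur on the detector [m]
print("half-power radius [m]:", np.quantile(r, 0.5))
```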
A computational model that predicts behavioral sensitivity to intracortical microstimulation
Kim, Sungshin; Callier, Thierri; Bensmaia, Sliman J.
2016-01-01
Objective: Intracortical microstimulation (ICMS) is a powerful tool to investigate the neural mechanisms of perception and can be used to restore sensation for patients who have lost it. While sensitivity to ICMS has previously been characterized, no systematic framework has been developed to summarize the detectability of individual ICMS pulse trains or the discriminability of pairs of pulse trains. Approach: We develop a simple simulation that describes the responses of a population of neurons to a train of electrical pulses delivered through a microelectrode. We then perform an ideal observer analysis on the simulated population responses to predict the behavioral performance of non-human primates in ICMS detection and discrimination tasks. Main results: Our computational model can predict behavioral performance across a wide range of stimulation conditions with high accuracy (R² = 0.97) and generalizes to novel ICMS pulse trains that were not used to fit its parameters. Furthermore, the model provides a theoretical basis for the finding that amplitude discrimination based on ICMS violates Weber's law. Significance: The model can be used to characterize the sensitivity to ICMS across the range of perceptible and safe stimulation regimes. As such, it will be a useful tool for both neuroscience and neuroprosthetics. PMID:27977419
A computational model that predicts behavioral sensitivity to intracortical microstimulation.
Kim, Sungshin; Callier, Thierri; Bensmaia, Sliman J
2017-02-01
Intracortical microstimulation (ICMS) is a powerful tool to investigate the neural mechanisms of perception and can be used to restore sensation for patients who have lost it. While sensitivity to ICMS has previously been characterized, no systematic framework has been developed to summarize the detectability of individual ICMS pulse trains or the discriminability of pairs of pulse trains. We develop a simple simulation that describes the responses of a population of neurons to a train of electrical pulses delivered through a microelectrode. We then perform an ideal observer analysis on the simulated population responses to predict the behavioral performance of non-human primates in ICMS detection and discrimination tasks. Our computational model can predict behavioral performance across a wide range of stimulation conditions with high accuracy (R² = 0.97) and generalizes to novel ICMS pulse trains that were not used to fit its parameters. Furthermore, the model provides a theoretical basis for the finding that amplitude discrimination based on ICMS violates Weber's law. The model can be used to characterize the sensitivity to ICMS across the range of perceptible and safe stimulation regimes. As such, it will be a useful tool for both neuroscience and neuroprosthetics.
A computational model that predicts behavioral sensitivity to intracortical microstimulation
NASA Astrophysics Data System (ADS)
Kim, Sungshin; Callier, Thierri; Bensmaia, Sliman J.
2017-02-01
Objective. Intracortical microstimulation (ICMS) is a powerful tool to investigate the neural mechanisms of perception and can be used to restore sensation for patients who have lost it. While sensitivity to ICMS has previously been characterized, no systematic framework has been developed to summarize the detectability of individual ICMS pulse trains or the discriminability of pairs of pulse trains. Approach. We develop a simple simulation that describes the responses of a population of neurons to a train of electrical pulses delivered through a microelectrode. We then perform an ideal observer analysis on the simulated population responses to predict the behavioral performance of non-human primates in ICMS detection and discrimination tasks. Main results. Our computational model can predict behavioral performance across a wide range of stimulation conditions with high accuracy (R² = 0.97) and generalizes to novel ICMS pulse trains that were not used to fit its parameters. Furthermore, the model provides a theoretical basis for the finding that amplitude discrimination based on ICMS violates Weber’s law. Significance. The model can be used to characterize the sensitivity to ICMS across the range of perceptible and safe stimulation regimes. As such, it will be a useful tool for both neuroscience and neuroprosthetics.
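The two modeling stages shared by the entries above (simulating population responses to a pulse train, then applying an ideal-observer rule to those responses) can be sketched as follows. All parameters, including the linear rate-versus-amplitude activation model, are invented for illustration and are not the authors' fitted values.

```python
# Sketch of the two stages: (1) simulate population spike counts evoked
# by a pulse train, (2) score detection with an ideal observer. All
# numbers are hypothetical, not the published model parameters.
import numpy as np

rng = np.random.default_rng(2)
BASE, GAIN = 0.01, 0.001   # spontaneous and evoked rate terms, invented

def population_counts(amplitude_ua, n_neurons=100, n_trials=2000):
    # Assumed activation model: per-neuron spike rate rises linearly
    # with pulse amplitude above a spontaneous baseline.
    rate = BASE + GAIN * amplitude_ua
    return rng.poisson(rate, (n_trials, n_neurons)).sum(axis=1)

def percent_correct(amplitude_ua):
    stim, catch = population_counts(amplitude_ua), population_counts(0.0)
    # Ideal observer on summed counts, scored as 2AFC percent correct:
    # the probability a stimulated trial beats a catch trial, with ties
    # credited at chance.
    gt = (stim[:, None] > catch[None, :]).mean()
    eq = (stim[:, None] == catch[None, :]).mean()
    return gt + 0.5 * eq

for amp in (10, 20, 40, 80):   # pulse amplitude [uA]
    print(amp, round(percent_correct(amp), 3))
```

Sweeping amplitude in this way traces out a psychometric function, which is the kind of predicted behavioral curve the model is fit against.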
Is there a link between the crafting of tools and the evolution of cognition?
Taylor, Alex H; Gray, Russell D
2014-11-01
The ability to craft tools is one of the defining features of our species. The technical intelligence hypothesis predicts that tool-making species should have enhanced physical cognition. Here we review how the physical problem-solving performance of tool-making apes and corvids compares to that of closely related species. We conclude that, while some performance differences have been found, overall the evidence is at best equivocal. We argue that increased sample sizes, novel experimental designs, and a signature-testing approach are required to determine the effect tool crafting has on the evolution of intelligence. WIREs Cogn Sci 2014, 5:693-703. doi: 10.1002/wcs.1322 For further resources related to this article, please visit the WIREs website. The authors have declared no conflicts of interest for this article. © 2014 The Authors. WIREs Cognitive Science published by John Wiley & Sons, Ltd.
Aguilar, María Esther Urrutia; Rosas, Efrén Raúl Ponce; León, Silvia Ortiz; Ochoa, Laura Peñaloza; Guzmán, Rosalinda Guevara
2017-01-01
To identify and compare the predictors associated with the academic performance of medical students undertaking cellular biology and human histology, and of physiotherapy students taking molecular, cellular and tissue biology. An academic follow-up was carried out during the school year. Instruments assessing prior knowledge, vocation, psychological state and coping were applied at the beginning of the school year; the last two were applied twice more afterwards. Data were analyzed using descriptive, comparative, correlational and predictive statistics. The students' participation was voluntary, and data confidentiality was safeguarded. Copyright: © 2017 Secretaría de Salud
A Unified Model of Performance: Validation of its Predictions across Different Sleep/Wake Schedules.
Ramakrishnan, Sridhar; Wesensten, Nancy J; Balkin, Thomas J; Reifman, Jaques
2016-01-01
Historically, mathematical models of human neurobehavioral performance developed on data from one sleep study were limited to predicting performance in similar studies, restricting their practical utility. We recently developed a unified model of performance (UMP) to predict the effects of the continuum of sleep loss, from chronic sleep restriction (CSR) to total sleep deprivation (TSD) challenges, and validated it using data from two studies of one laboratory. Here, we significantly extended this effort by validating the UMP predictions across a wide range of sleep/wake schedules from different studies and laboratories. We developed the UMP on psychomotor vigilance task (PVT) lapse data from one study encompassing four different CSR conditions (7 d of 3, 5, 7, and 9 h of sleep/night), and predicted performance in five other studies (from four laboratories), including different combinations of TSD (40 to 88 h), CSR (2 to 6 h of sleep/night), control (8 to 10 h of sleep/night), and nap (nocturnal and diurnal) schedules. The UMP accurately predicted PVT performance trends across 14 different sleep/wake conditions, yielding average prediction errors between 7% and 36%, with the predictions lying within 2 standard errors of the measured data 87% of the time. In addition, the UMP accurately predicted performance impairment (average error of 15%) for schedules (TSD and naps) not used in model development. The unified model of performance can be used as a tool to help design sleep/wake schedules to optimize the extent and duration of neurobehavioral performance and to accelerate recovery after sleep loss. © 2016 Associated Professional Sleep Societies, LLC.
A Unified Model of Performance: Validation of its Predictions across Different Sleep/Wake Schedules
Ramakrishnan, Sridhar; Wesensten, Nancy J.; Balkin, Thomas J.; Reifman, Jaques
2016-01-01
Study Objectives: Historically, mathematical models of human neurobehavioral performance developed on data from one sleep study were limited to predicting performance in similar studies, restricting their practical utility. We recently developed a unified model of performance (UMP) to predict the effects of the continuum of sleep loss—from chronic sleep restriction (CSR) to total sleep deprivation (TSD) challenges—and validated it using data from two studies of one laboratory. Here, we significantly extended this effort by validating the UMP predictions across a wide range of sleep/wake schedules from different studies and laboratories. Methods: We developed the UMP on psychomotor vigilance task (PVT) lapse data from one study encompassing four different CSR conditions (7 d of 3, 5, 7, and 9 h of sleep/night), and predicted performance in five other studies (from four laboratories), including different combinations of TSD (40 to 88 h), CSR (2 to 6 h of sleep/night), control (8 to 10 h of sleep/night), and nap (nocturnal and diurnal) schedules. Results: The UMP accurately predicted PVT performance trends across 14 different sleep/wake conditions, yielding average prediction errors between 7% and 36%, with the predictions lying within 2 standard errors of the measured data 87% of the time. In addition, the UMP accurately predicted performance impairment (average error of 15%) for schedules (TSD and naps) not used in model development. Conclusions: The unified model of performance can be used as a tool to help design sleep/wake schedules to optimize the extent and duration of neurobehavioral performance and to accelerate recovery after sleep loss. Citation: Ramakrishnan S, Wesensten NJ, Balkin TJ, Reifman J. A unified model of performance: validation of its predictions across different sleep/wake schedules. SLEEP 2016;39(1):249–262. PMID:26518594
BUSCA: an integrative web server to predict subcellular localization of proteins.
Savojardo, Castrense; Martelli, Pier Luigi; Fariselli, Piero; Profiti, Giuseppe; Casadio, Rita
2018-04-30
Here, we present BUSCA (http://busca.biocomp.unibo.it), a novel web server that integrates different computational tools for predicting protein subcellular localization. BUSCA combines methods for identifying signal and transit peptides (DeepSig and TPpred3), GPI-anchors (PredGPI) and transmembrane domains (ENSEMBLE3.0 and BetAware) with tools for discriminating the subcellular localization of both globular and membrane proteins (BaCelLo, MemLoci and SChloro). Outcomes from the different tools are processed and integrated for annotating the subcellular localization of both eukaryotic and bacterial protein sequences. We benchmark BUSCA against protein targets derived from recent CAFA experiments and other specific data sets, reporting state-of-the-art performance. BUSCA scores better than all other evaluated methods on 2732 targets from CAFA2, with an F1 value equal to 0.49, and is among the best methods when predicting targets from CAFA3. We propose BUSCA as an integrated and accurate resource for the annotation of protein subcellular localization.
Bankruptcy Prevention: New Effort to Reflect on Legal and Social Changes.
Kliestik, Tomas; Misankova, Maria; Valaskova, Katarina; Svabova, Lucia
2018-04-01
Every corporation has an economic and moral responsibility to its stockholders to perform well financially. However, the number of bankruptcies in Slovakia has been growing for several years without an apparent macroeconomic cause. To prevent rapid deterioration and the outflow of foreign capital, various efforts are being zealously implemented. Robust analysis using conventional bankruptcy prediction tools revealed that the existing models are adaptable to local conditions, particularly local legislation. Furthermore, it was confirmed that most of these outdated tools have sufficient capability to warn of impending financial problems several years in advance. A novel bankruptcy prediction tool that outperforms the conventional models was developed. However, it is increasingly challenging to predict bankruptcy risk as corporations have become more global and more complex and have developed sophisticated schemes to hide their actual situations under the guise of "optimization" for tax authorities. Nevertheless, scepticism remains because economic engineers have established bankruptcy as a strategy to limit the liability resulting from court-imposed penalties.
Ouyang, Qin; Chen, Quansheng; Zhao, Jiewen
2016-02-05
The approach presented herein reports the application of near infrared (NIR) spectroscopy, in comparison with a human sensory panel, as a tool for estimating Chinese rice wine quality; specifically, to predict the overall sensory scores assigned by the trained sensory panel. A back-propagation artificial neural network (BPANN) combined with the adaptive boosting (AdaBoost) algorithm, namely BP-AdaBoost, was proposed as a novel nonlinear modeling algorithm. First, the optimal spectral intervals were selected by synergy interval partial least squares (Si-PLS). Then, a BP-AdaBoost model based on the optimal spectral intervals was established, called the Si-BP-AdaBoost model. These models were optimized by cross validation, and the performance of each final model was evaluated according to the correlation coefficient (Rp) and root mean square error of prediction (RMSEP) in the prediction set. Si-BP-AdaBoost showed excellent performance in comparison with other models. The best Si-BP-AdaBoost model achieved Rp = 0.9180 and RMSEP = 2.23 in the prediction set. It was concluded that NIR spectroscopy combined with Si-BP-AdaBoost is an appropriate method for predicting the sensory quality of Chinese rice wine. Copyright © 2015 Elsevier B.V. All rights reserved.
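A hedged sketch of the modeling idea follows: a back-propagation network boosted with AdaBoost (here via scikit-learn's AdaBoost.R2 regressor, which draws weighted bootstrap samples, so the MLP base learner does not need sample-weight support), evaluated by Rp and RMSEP on a held-out prediction set. Synthetic data stand in for the Si-PLS-selected spectral intervals; this is not the authors' implementation.

```python
# Sketch of BP-AdaBoost evaluated by Rp and RMSEP; data are synthetic
# stand-ins for absorbances in the Si-PLS-selected intervals.
import numpy as np
from sklearn.ensemble import AdaBoostRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 30))                          # spectra (synthetic)
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=200)   # sensory-score proxy

X_cal, X_pred, y_cal, y_obs = train_test_split(
    X, y, test_size=0.3, random_state=0)

# AdaBoost over a small back-propagation network.
model = AdaBoostRegressor(
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
    n_estimators=10, random_state=0).fit(X_cal, y_cal)

y_hat = model.predict(X_pred)
rp = np.corrcoef(y_obs, y_hat)[0, 1]                    # Rp
rmsep = np.sqrt(np.mean((y_obs - y_hat) ** 2))          # RMSEP
print(f"Rp={rp:.3f}, RMSEP={rmsep:.3f}")
```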
Human performance cognitive-behavioral modeling: a benefit for occupational safety.
Gore, Brian F
2002-01-01
Human Performance Modeling (HPM) is a computer-aided job analysis software methodology used to generate predictions of complex human-automation integration and system flow patterns with the goal of improving operator and system safety. The use of HPM tools has recently been increasing due to reductions in computational cost, augmentations in the tools' fidelity, and usefulness in the generated output. An examination of an Air Man-machine Integration Design and Analysis System (Air MIDAS) model evaluating complex human-automation integration currently underway at NASA Ames Research Center will highlight the importance to occupational safety of considering both cognitive and physical aspects of performance when researching human error.
Human performance cognitive-behavioral modeling: a benefit for occupational safety
NASA Technical Reports Server (NTRS)
Gore, Brian F.
2002-01-01
Human Performance Modeling (HPM) is a computer-aided job analysis software methodology used to generate predictions of complex human-automation integration and system flow patterns with the goal of improving operator and system safety. The use of HPM tools has recently been increasing due to reductions in computational cost, augmentations in the tools' fidelity, and usefulness in the generated output. An examination of an Air Man-machine Integration Design and Analysis System (Air MIDAS) model evaluating complex human-automation integration currently underway at NASA Ames Research Center will highlight the importance to occupational safety of considering both cognitive and physical aspects of performance when researching human error.
ERIC Educational Resources Information Center
Lau, Wilfred W. F.; Yuen, Allan H. K.
2009-01-01
Recent years have seen a shift in focus from assessment of learning to assessment for learning and the emergence of alternative assessment methods. However, the reliability and validity of these methods as assessment tools are still questionable. In this article, we investigated the predictive validity of measures of the Pathfinder Scaling…
L.R. Iverson; A.M. Prasad; A. Liaw
2004-01-01
More and better machine learning tools are becoming available for landscape ecologists to aid in understanding species-environment relationships and to map probable species occurrence now and potentially into the future. To that end, we evaluated three statistical models: Regression Tree Analysis (RTA), Bagging Trees (BT) and Random Forest (RF) for their utility in...
Validation of Tendril TrueHome Using Software-to-Software Comparison
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maguire, Jeffrey B; Horowitz, Scott G; Moore, Nathan
This study performed a comparative evaluation of EnergyPlus version 8.6 and Tendril TrueHome, two physics-based home energy simulation models, to identify differences in energy consumption predictions between the two programs and to resolve discrepancies between them. EnergyPlus is considered a benchmark, best-in-class software tool for building energy simulation. This exercise sought to improve both software tools through additional evaluation and scrutiny.
Advanced Online Survival Analysis Tool for Predictive Modelling in Clinical Data Science.
Montes-Torres, Julio; Subirats, José Luis; Ribelles, Nuria; Urda, Daniel; Franco, Leonardo; Alba, Emilio; Jerez, José Manuel
2016-01-01
One of the prevailing applications of machine learning is the use of predictive modelling in clinical survival analysis. In this work, we present our view of the current situation of computer tools for survival analysis, stressing the need of transferring the latest results in the field of machine learning to biomedical researchers. We propose a web based software for survival analysis called OSA (Online Survival Analysis), which has been developed as an open access and user friendly option to obtain discrete time, predictive survival models at individual level using machine learning techniques, and to perform standard survival analysis. OSA employs an Artificial Neural Network (ANN) based method to produce the predictive survival models. Additionally, the software can easily generate survival and hazard curves with multiple options to personalise the plots, obtain contingency tables from the uploaded data to perform different tests, and fit a Cox regression model from a number of predictor variables. In the Materials and Methods section, we depict the general architecture of the application and introduce the mathematical background of each of the implemented methods. The study concludes with examples of use showing the results obtained with public datasets.
Advanced Online Survival Analysis Tool for Predictive Modelling in Clinical Data Science
Montes-Torres, Julio; Subirats, José Luis; Ribelles, Nuria; Urda, Daniel; Franco, Leonardo; Alba, Emilio; Jerez, José Manuel
2016-01-01
One of the prevailing applications of machine learning is the use of predictive modelling in clinical survival analysis. In this work, we present our view of the current situation of computer tools for survival analysis, stressing the need of transferring the latest results in the field of machine learning to biomedical researchers. We propose a web based software for survival analysis called OSA (Online Survival Analysis), which has been developed as an open access and user friendly option to obtain discrete time, predictive survival models at individual level using machine learning techniques, and to perform standard survival analysis. OSA employs an Artificial Neural Network (ANN) based method to produce the predictive survival models. Additionally, the software can easily generate survival and hazard curves with multiple options to personalise the plots, obtain contingency tables from the uploaded data to perform different tests, and fit a Cox regression model from a number of predictor variables. In the Materials and Methods section, we depict the general architecture of the application and introduce the mathematical background of each of the implemented methods. The study concludes with examples of use showing the results obtained with public datasets. PMID:27532883
Applied Meteorology Unit (AMU)
NASA Technical Reports Server (NTRS)
Bauman, William; Crawford, Winifred; Barrett, Joe; Watson, Leela; Wheeler, Mark
2010-01-01
This report summarizes the Applied Meteorology Unit (AMU) activities for the first quarter of Fiscal Year 2010 (October - December 2009). A detailed project schedule is included in the Appendix. Included tasks are: (1) Peak Wind Tool for User Launch Commit Criteria (LCC), (2) Objective Lightning Probability Tool, Phase III, (3) Peak Wind Tool for General Forecasting, Phase II, (4) Upgrade Summer Severe Weather Tool in Meteorological Interactive Data Display System (MIDDS), (5) Advanced Regional Prediction System (ARPS) Data Analysis System (ADAS) Update and Maintainability, (6) Verify 12-km resolution North American Model (MesoNAM) Performance, and (7) Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) Graphical User Interface.
Ball Bearing Analysis with the ORBIS Tool
NASA Technical Reports Server (NTRS)
Halpin, Jacob D.
2016-01-01
Ball bearing design is critical to the success of aerospace mechanisms. Key bearing performance parameters, such as load capability, stiffness, torque, and life, all depend on accurate determination of the internal load distribution. Hence, a good analytical bearing tool that provides both comprehensive capabilities and reliable results becomes a significant asset to the engineer. This paper introduces the ORBIS bearing tool. A discussion of key modeling assumptions and a technical overview is provided. Numerous validation studies and case studies using the ORBIS tool are presented. All results suggest that the ORBIS code's predictions of bearing internal load distributions, stiffness, deflection, and stresses correlate closely with reference results.
Integrated Aero-Propulsion CFD Methodology for the Hyper-X Flight Experiment
NASA Technical Reports Server (NTRS)
Cockrell, Charles E., Jr.; Engelund, Walter C.; Bittner, Robert D.; Dilley, Arthur D.; Jentink, Tom N.; Frendi, Abdelkader
2000-01-01
Computational fluid dynamics (CFD) tools have been used extensively in the analysis and development of the X-43A Hyper-X Research Vehicle (HXRV). A significant element of this analysis is the prediction of integrated vehicle aero-propulsive performance, which includes an integration of aerodynamic and propulsion flow fields. This paper describes analysis tools used and the methodology for obtaining pre-flight predictions of longitudinal performance increments. The use of higher-fidelity methods to examine flow-field characteristics and scramjet flowpath component performance is also discussed. Limited comparisons with available ground test data are shown to illustrate the approach used to calibrate methods and assess solution accuracy. Inviscid calculations to evaluate lateral-directional stability characteristics are discussed. The methodology behind 3D tip-to-tail calculations is described and the impact of 3D exhaust plume expansion in the afterbody region is illustrated. Finally, future technology development needs in the area of hypersonic propulsion-airframe integration analysis are discussed.
Auralization of Hybrid Wing Body Aircraft Flyover Noise from System Noise Predictions
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Aumann, Aric R.; Lopes, Leonard V.; Burley, Casey L.
2013-01-01
System noise assessments of a state-of-the-art reference aircraft (similar to a Boeing 777-200ER with GE90-like turbofan engines) and several hybrid wing body (HWB) aircraft configurations were recently performed using NASA engine and aircraft system analysis tools. The HWB aircraft were sized to an equivalent mission as the reference aircraft and assessments were performed using measurements of airframe shielding from a series of propulsion airframe aeroacoustic experiments. The focus of this work is to auralize flyover noise from the reference aircraft and the best HWB configuration using source noise predictions and shielding data based largely on the earlier assessments. For each aircraft, three flyover conditions are auralized. These correspond to approach, sideline, and cutback operating states, but flown in straight and level flight trajectories. The auralizations are performed using synthesis and simulation tools developed at NASA. Audio and visual presentations are provided to allow the reader to experience the flyover from the perspective of a listener in the simulated environment.
Lancaster, Timothy S; Schill, Matthew R; Greenberg, Jason W; Ruaengsri, Chawannuch; Schuessler, Richard B; Lawton, Jennifer S; Maniar, Hersh S; Pasque, Michael K; Moon, Marc R; Damiano, Ralph J; Melby, Spencer J
2018-05-01
The recently developed American College of Cardiology Foundation-Society of Thoracic Surgeons (STS) Collaboration on the Comparative Effectiveness of Revascularization Strategy (ASCERT) Long-Term Survival Probability Calculator is a valuable addition to existing short-term risk-prediction tools for cardiac surgical procedures but has yet to be externally validated. Institutional data of 654 patients aged 65 years or older undergoing isolated coronary artery bypass grafting between 2005 and 2010 were reviewed. Predicted survival probabilities were calculated using the ASCERT model. Survival data were collected using the Social Security Death Index and institutional medical records. Model calibration and discrimination were assessed for the overall sample and for risk-stratified subgroups based on (1) ASCERT 7-year survival probability and (2) the predicted risk of mortality (PROM) from the STS Short-Term Risk Calculator. Logistic regression analysis was performed to evaluate additional perioperative variables contributing to death. Overall survival was 92.1% (569 of 597) at 1 year and 50.5% (164 of 325) at 7 years. Calibration assessment found no significant differences between predicted and actual survival curves for the overall sample or for the risk-stratified subgroups, whether stratified by predicted 7-year survival or by PROM. Discriminative performance was comparable between the ASCERT and PROM models for 7-year survival prediction (p < 0.001 for both; C-statistic = 0.815 for ASCERT and 0.781 for PROM). Prolonged ventilation, stroke, and hospital length of stay were also predictive of long-term death. The ASCERT survival probability calculator was externally validated for prediction of long-term survival after coronary artery bypass grafting in all risk groups. The widely used STS PROM performed comparably as a predictor of long-term survival. Both tools provide important information for preoperative decision making and patient counseling about potential outcomes after coronary artery bypass grafting. Copyright © 2018 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
Ghosn, Marwan; Ibrahim, Tony; El Rassy, Elie; Nassani, Najib; Ghanem, Sassine; Assi, Tarek
2017-03-01
Comprehensive geriatric assessment (CGA) is a complex and interdisciplinary approach to evaluating the health status of elderly patients. The Karnofsky Performance Scale (KPS) and Physical Performance Test (PPT) are less time-consuming tools that measure functional status. This study was designed to assess and compare abridged geriatric assessment (GA), KPS and PPT as predictive tools of mortality in elderly patients with cancer. This prospective interventional study included all individuals aged >70 years who were diagnosed with cancer during the study period. Subjects were interviewed directly using a procedure that included a clinical test and a questionnaire composed of the KPS, PPT and abridged GA. Overall survival (OS) was the primary endpoint. The log-rank test was used to compare survival curves, and Cox's regression model (forward procedure) was used for multivariate survival analysis. One hundred patients were included in this study. Abridged GA was the only tool found to predict mortality [median OS for unfit patients (at least two impairments) 467 days vs 1030 days for fit patients; p=0.04]. Patients defined as fit by mean PPT score (>20) had worse median OS (560 vs 721 days); however, this difference was not significant (p=0.488, log-rank test). Although median OS did not differ significantly between patients with low (≤80) and high (>80) KPS scores (467 and 795 days, respectively; p=0.09), the survival curves diverged after nearly 120 days of follow-up. Visual and hearing impairments were the only components of abridged GA of prognostic value. Neither KPS nor PPT was shown to predict mortality in elderly patients with cancer, whereas abridged GA was predictive. This study suggests a possible role for visual and hearing assessment in screening for patients requiring CGA. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Isvoran, Adriana
2016-03-01
The effects of the herbicides nicosulfuron and chlorsulfuron and the fungicides difenoconazole and drazoxolon on the catalase produced by the soil microorganism Proteus mirabilis are assessed using molecular docking. The interactions of the pesticides with the enzyme are predicted using the SwissDock and PatchDock docking tools. The binding energies predicted for the enzyme-pesticide complexes by the two docking tools are correlated; all of the considered pesticides show favorable binding to the enzyme, but only the herbicides bind to the catalytic site. These results suggest an inhibitory potential of chlorsulfuron and nicosulfuron on catalase activity in soil.
Carr, Sandra E; Celenza, Antonio; Puddey, Ian B; Lake, Fiona
2014-07-30
Little recent published evidence explores the relationship between academic performance in medical school and performance as a junior doctor. Although many forms of assessment are used to demonstrate a medical student's knowledge or competence, these measures may not reliably predict performance in clinical practice following graduation. This descriptive cohort study explores the relationship between academic performance of medical students and workplace performance as junior doctors, including the influence of age, gender, ethnicity, clinical attachment, assessment type and summary score measures (grade point average) on performance in the workplace as measured by the Junior Doctor Assessment Tool. There were two hundred participants. There were significant correlations between performance as a Junior Doctor (combined overall score) and the grade point average (r = 0.229, P = 0.002), the score from the Year 6 Emergency Medicine attachment (r = 0.361, P < 0.001) and the Written Examination in Year 6 (r = 0.178, P = 0.014). There was no significant effect of any individual method of assessment in medical school, gender or ethnicity on the overall combined score of performance of the junior doctor. Performance on integrated assessments from medical school is correlated to performance as a practicing physician as measured by the Junior Doctor Assessment Tool. These findings support the value of combining undergraduate assessment scores to assess competence and predict future performance.
Predicting trauma patient mortality: ICD [or ICD-10-AM] versus AIS based approaches.
Willis, Cameron D; Gabbe, Belinda J; Jolley, Damien; Harrison, James E; Cameron, Peter A
2010-11-01
The International Classification of Diseases Injury Severity Score (ICISS) has been proposed as an International Classification of Diseases (ICD)-10-based alternative to mortality prediction tools that use Abbreviated Injury Scale (AIS) data, including the Trauma and Injury Severity Score (TRISS). To date, studies have not examined the performance of ICISS using Australian trauma registry data. This study aimed to compare the performance of ICISS with other mortality prediction tools in an Australian trauma registry. This was a retrospective review of prospectively collected data from the Victorian State Trauma Registry. A training dataset was created for model development and a validation dataset for evaluation. The multiplicative ICISS model was compared with a worst injury ICISS approach, Victorian TRISS (V-TRISS, using local coefficients), maximum AIS severity and a multivariable model including ICD-10-AM codes as predictors. Models were investigated for discrimination (C-statistic) and calibration (Hosmer-Lemeshow statistic). The multivariable approach had the highest level of discrimination (C-statistic 0.90) and calibration (H-L 7.65, P = 0.468). Worst injury ICISS, V-TRISS and maximum AIS had similar performance. The multiplicative ICISS produced the lowest level of discrimination (C-statistic 0.80) and poorest calibration (H-L 50.23, P < 0.001). The performance of ICISS may be affected by the data used to develop estimates, the ICD version employed, the methods for deriving estimates and the inclusion of covariates. In this analysis, a multivariable approach using ICD-10-AM codes was the best-performing method. A multivariable ICISS approach may therefore be a useful alternative to AIS-based methods and may have comparable predictive performance to locally derived TRISS models. © 2010 The Authors. ANZ Journal of Surgery © 2010 Royal Australasian College of Surgeons.
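For readers unfamiliar with ICISS, its multiplicative form is simple to state: the predicted survival probability is the product of the survival risk ratios (SRRs) of the patient's injury codes, where each SRR is the fraction of registry patients with that code who survived. The sketch below uses invented codes and SRR values; it also shows the "worst injury" variant mentioned above, which keeps only the single lowest SRR.

```python
# Sketch of the multiplicative ICISS calculation; the ICD codes and
# SRR values below are invented for illustration.
srr = {"S06.0": 0.98, "S27.3": 0.85, "S36.0": 0.90}  # survivors/total per code

def iciss(icd_codes):
    p = 1.0
    for code in icd_codes:
        p *= srr[code]          # multiply the SRR of each injury
    return p                    # predicted probability of survival

patient = ["S06.0", "S27.3", "S36.0"]
print(f"ICISS = {iciss(patient):.3f}")  # 0.98 * 0.85 * 0.90 = 0.750
# The "worst injury" variant uses only the single lowest SRR:
print(f"worst-injury ICISS = {min(srr[c] for c in patient):.3f}")
```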
USDA-ARS?s Scientific Manuscript database
In conventional and most IPM programs, application of insecticides continues to be the most important responsive pest control tactic. For both immediate and long-term optimization and sustainability of insecticide applications, it is paramount to study the factors affecting the performance of insect...
USDA-ARS?s Scientific Manuscript database
Body condition score is used as a management tool to predict competency of reproduction in beef cows. Therefore, a retrospective study was performed to evaluate association of BCS at calving with subsequent pregnancy rate, days to first estrus, nutrient status (assessed by blood metabolites), and c...
Big Data Toolsets to Pharmacometrics: Application of Machine Learning for Time-to-Event Analysis.
Gong, Xiajing; Hu, Meng; Zhao, Liang
2018-05-01
Additional value can be potentially created by applying big data tools to address pharmacometric problems. The performances of machine learning (ML) methods and the Cox regression model were evaluated based on simulated time-to-event data synthesized under various preset scenarios, i.e., with linear vs. nonlinear and dependent vs. independent predictors in the proportional hazard function, or with high-dimensional data featured by a large number of predictor variables. Our results showed that ML-based methods outperformed the Cox model in prediction performance as assessed by concordance index and in identifying the preset influential variables for high-dimensional data. The prediction performances of ML-based methods are also less sensitive to data size and censoring rates than the Cox regression model. In conclusion, ML-based methods provide a powerful tool for time-to-event analysis, with a built-in capacity for high-dimensional data and better performance when the predictor variables assume nonlinear relationships in the hazard function. © 2018 The Authors. Clinical and Translational Science published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
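The concordance index used above to compare the ML methods with the Cox model can be computed directly from its definition: the fraction of comparable patient pairs whose predicted risks are ordered consistently with their observed event times, where a pair is comparable only if the earlier observation experienced the event. A minimal sketch, not the authors' implementation (tied event times are skipped here for simplicity):

```python
# Pairwise concordance index for right-censored time-to-event data.
import itertools

def concordance_index(times, events, risks):
    # times: observed times; events: 1 = event, 0 = censored;
    # risks: model risk scores (higher = earlier expected event).
    concordant, comparable = 0.0, 0
    for i, j in itertools.combinations(range(len(times)), 2):
        if times[i] == times[j]:
            continue                       # skip tied times (simplification)
        first = i if times[i] < times[j] else j
        if not events[first]:
            continue                       # earlier time censored: not comparable
        comparable += 1
        other = j if first == i else i
        if risks[first] > risks[other]:
            concordant += 1                # risk order matches time order
        elif risks[first] == risks[other]:
            concordant += 0.5              # tied risks get half credit
    return concordant / comparable

print(concordance_index([2, 5, 3, 8], [1, 0, 1, 1], [0.9, 0.3, 0.5, 0.1]))
```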
Boundary Layer Transition Results From STS-114
NASA Technical Reports Server (NTRS)
Berry, Scott A.; Horvath, Thomas J.; Cassady, Amy M.; Kirk, Benjamin S.; Wang, K. C.; Hyatt, Andrew J.
2006-01-01
The tool for predicting the onset of boundary layer transition from damage to and/or repair of the thermal protection system developed in support of Shuttle Return to Flight is compared to the STS-114 flight results. The Boundary Layer Transition (BLT) Tool is part of a suite of tools that analyze the aerothermodynamic environment of the local thermal protection system to allow informed disposition of damage for making recommendations to fly as is or to repair. Using mission-specific trajectory information and details of each damage site or repair, the expected time of transition onset is predicted to help determine the proper aerothermodynamic environment to use in the subsequent thermal and stress analysis of the local structure. The boundary layer transition criteria utilized for the tool were developed from ground-based measurements to account for the effect of both protuberances and cavities and have been calibrated against flight data. Computed local boundary layer edge conditions provided the means to correlate the experimental results and then to extrapolate to flight. During STS-114, the BLT Tool was utilized and was part of the decision-making process to perform an extravehicular activity to remove the large gap fillers. The role of the BLT Tool during this mission, along with the supporting information that was acquired for the on-orbit analysis, is reviewed. Once the large gap fillers were removed, all remaining damage sites were cleared for reentry as is. Post-flight analysis of the transition onset time revealed excellent agreement with BLT Tool predictions.
Analytical Tools for Space Suit Design
NASA Technical Reports Server (NTRS)
Aitchison, Lindsay
2011-01-01
As indicated by the implementation of multiple small project teams within the agency, NASA is adopting a lean approach to hardware development that emphasizes quick product realization and rapid response to shifting program and agency goals. Over the past two decades, space suit design has been evolutionary in approach, with emphasis on building prototypes and then testing them with the largest practical range of subjects possible. The results of these efforts show continuous improvement but make scaled design and performance predictions almost impossible with limited budgets and little time. Thus, in an effort to start changing the way NASA approaches space suit design and analysis, the Advanced Space Suit group has initiated the development of an integrated design and analysis tool. It is a multi-year, if not decadal, development effort that, when fully implemented, is envisioned to generate analyses of any given space suit architecture or, conversely, predictions of ideal space suit architectures given specific mission parameters. The master tool will exchange information to and from a set of five sub-tool groups in order to generate the desired output. The basic functions of each sub-tool group, the initial relationships between the sub-tools, and a comparison to state-of-the-art software and tools are discussed.
Status of Technology Development to enable Large Stable UVOIR Space Telescopes
NASA Astrophysics Data System (ADS)
Stahl, H. Philip; MSFC AMTD Team
2017-01-01
NASA MSFC has two funded Strategic Astrophysics Technology projects to develop technology for potential future large missions: AMTD and PTC. The Advanced Mirror Technology Development (AMTD) project is developing technology to make mechanically stable mirrors for a 4-meter or larger UVOIR space telescope. AMTD is demonstrating this technology by making a 1.5-m-diameter, 200-mm-thick ULE(C) mirror that is one-third scale of a full-size 4-m mirror. AMTD is characterizing the mechanical and thermal performance of this mirror and of a 1.2-meter Zerodur(R) mirror to validate integrated modeling tools. Additionally, AMTD has developed integrated modeling tools which are being used to evaluate primary mirror systems for a potential Habitable Exoplanet Mission and to analyze the interaction between optical telescope wavefront stability and coronagraph contrast leakage. The Predictive Thermal Control (PTC) project is developing technology to enable highly stable thermal wavefront performance by using integrated modeling tools to predict and actively control the thermal environment of a 4-m or larger UVOIR space telescope.
Comparative assessment of methods for the fusion transcripts detection from RNA-Seq data
Kumar, Shailesh; Vo, Angie Duy; Qin, Fujun; Li, Hui
2016-01-01
RNA-Seq made possible the global identification of fusion transcripts, i.e. “chimeric RNAs”. Even though various software packages have been developed to serve this purpose, they behave differently on datasets provided by different developers. It is important for both users and developers to have an unbiased assessment of the performance of existing fusion detection tools. Toward this goal, we compared the performance of 12 well-known fusion detection software packages. We evaluated the sensitivity, false discovery rate, computing time, and memory usage of these tools on four different datasets (positive, negative, mixed, and test). We conclude that some tools are better than others in terms of sensitivity, positive prediction value, time consumption and memory usage. We also observed small overlaps between the fusions detected by different tools in the real dataset (test dataset). This could be due to false discoveries by the various tools, but could also be because none of the tools is comprehensive. We have found that the performance of the tools depends on the quality, read length, and number of reads of the RNA-Seq data. We recommend that users choose the proper tools for their purpose based on the properties of their RNA-Seq data. PMID:26862001
The Tracking Meteogram, an AWIPS II Tool for Time-Series Analysis
NASA Technical Reports Server (NTRS)
Burks, Jason Eric; Sperow, Ken
2015-01-01
A new tool has been developed for the National Weather Service (NWS) Advanced Weather Interactive Processing System (AWIPS) II through collaboration between NASA's Short-term Prediction Research and Transition (SPoRT) and the NWS Meteorological Development Laboratory (MDL). Referred to as the "Tracking Meteogram", the tool aids NWS forecasters in assessing meteorological parameters associated with moving phenomena. The tool aids forecasters in severe weather situations by providing valuable satellite and radar derived trends such as cloud top cooling rates, radial velocity couplets, reflectivity, and information from ground-based lightning networks. The Tracking Meteogram tool also aids in synoptic and mesoscale analysis by tracking parameters such as the deepening of surface low pressure systems, changes in surface or upper air temperature, and other properties. The tool provides a valuable new functionality and demonstrates the flexibility and extensibility of the NWS AWIPS II architecture. In 2014, the operational impact of the tool was formally evaluated through participation in the NOAA/NWS Operations Proving Ground (OPG), a risk reduction activity to assess performance and operational impact of new forecasting concepts, tools, and applications. Performance of the Tracking Meteogram Tool during the OPG assessment confirmed that it will be a valuable asset to the operational forecasters. This presentation reviews development of the Tracking Meteogram tool, performance and feedback acquired during the OPG activity, and future goals for continued support and extension to other application areas.
Goenka, Anu; Jeena, Prakash M; Mlisana, Koleka; Solomon, Tom; Spicer, Kevin; Stephenson, Rebecca; Verma, Arpana; Dhada, Barnesh; Griffiths, Michael J
2018-03-01
Early diagnosis of tuberculous meningitis (TBM) is crucial to achieve optimum outcomes. There is no effective rapid diagnostic test for use in children. We aimed to develop a clinical decision tool to facilitate the early diagnosis of childhood TBM. A retrospective case-control study was performed across 7 hospitals in KwaZulu-Natal, South Africa (2010-2014). We identified the variables most predictive of microbiologically confirmed TBM in children (3 months to 15 years) by univariate analysis. These variables were modelled into a clinical decision tool, and its performance was tested on an independent sample group. Of 865 children with suspected TBM, 3% (25) were identified with microbiologically confirmed TBM. Clinical information was retrieved for 22 microbiologically confirmed cases of TBM and compared with 66 controls matched for age, ethnicity, sex and geographical origin. The 9 most predictive variables among the confirmed cases were used to develop a clinical decision tool (CHILD TB LP): altered Consciousness; caregiver HIV infected; Illness length >7 days; Lethargy; focal neurologic Deficit; failure to Thrive; Blood/serum sodium <132 mmol/L; CSF Lymphocytes >10 × 10⁶/L; CSF Protein >0.65 g/L. This tool successfully classified an independent sample of 7 cases and 21 controls with a sensitivity of 100% and a specificity of 90%. The CHILD TB LP decision tool accurately classified microbiologically confirmed TBM. We propose that CHILD TB LP be prospectively evaluated as a novel rapid diagnostic tool for use in the initial evaluation of children with suspected neurologic infection presenting to hospitals in similar settings.
Towards early software reliability prediction for computer forensic tools (case study).
Abu Talib, Manar
2016-01-01
Versatility, flexibility and robustness are essential requirements for software forensic tools. Researchers and practitioners need to put more effort into assessing this type of tool. A Markov model is a robust means for analyzing and anticipating the functioning of an advanced component-based system. It is used, for instance, to analyze the reliability of the state machines of real-time reactive systems. This research extends the architecture-based software reliability prediction model for computer forensic tools, which is based on Markov chains and COSMIC-FFP. Basically, every part of the computer forensic tool is linked to a discrete-time Markov chain. If this can be done, then a probabilistic analysis by Markov chains can be performed to analyze the reliability of the components and of the whole tool. The purposes of the proposed reliability assessment method are to evaluate the tool's reliability in the early phases of its development, to improve the reliability assessment process for large computer forensic tools over time, and to compare alternative tool designs. The reliability analysis can assist designers in choosing the most reliable topology for the components, which can maximize the reliability of the tool and meet the expected reliability level specified by the end-user. The approach of assessing component-based tool reliability in the COSMIC-FFP context is illustrated with the Forensic Toolkit Imager case study.
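A toy version of the chain-per-component idea might look like the following: transient states for the tool's processing stages, absorbing "done" and "fail" states, and absorption probabilities solved from the standard (I - Q)B = R system for an absorbing Markov chain. The topology and transfer reliabilities are invented for illustration and are not from the paper.

```python
# Toy discrete-time Markov chain reliability sketch; states and
# transition probabilities are invented for illustration.
import numpy as np

# States: 0=acquire, 1=parse, 2=report, 3=done (absorbing), 4=fail (absorbing)
P = np.array([
    [0.00, 0.98, 0.00, 0.00, 0.02],   # acquire -> parse, or fail
    [0.00, 0.00, 0.97, 0.00, 0.03],   # parse   -> report, or fail
    [0.00, 0.00, 0.00, 0.99, 0.01],   # report  -> done, or fail
    [0.00, 0.00, 0.00, 1.00, 0.00],   # done stays done
    [0.00, 0.00, 0.00, 0.00, 1.00],   # fail stays failed
])

# Absorption probabilities for the transient states: solve (I - Q) B = R.
Q, R = P[:3, :3], P[:3, 3:]
B = np.linalg.solve(np.eye(3) - Q, R)
print("tool reliability from the start state:", B[0, 0])  # ~0.941
```

With this structure, a designer can swap component reliabilities or rewire the topology and immediately see the effect on end-to-end reliability, which is the comparison of alternative designs the abstract describes.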
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leung, Elo; Huang, Amy; Cadag, Eithon
In this study, we introduce the Protein Sequence Annotation Tool (PSAT), a web-based, sequence annotation meta-server for performing integrated, high-throughput, genome-wide sequence analyses. Our goals in building PSAT were to (1) create an extensible platform for integration of multiple sequence-based bioinformatics tools, (2) enable functional annotations and enzyme predictions over large input protein fasta data sets, and (3) provide a web interface for convenient execution of the tools. In this paper, we demonstrate the utility of PSAT by annotating the predicted peptide gene products of Herbaspirillum sp. strain RV1423, importing the results of PSAT into EC2KEGG, and using the resulting functional comparisons to identify a putative catabolic pathway, thereby distinguishing RV1423 from a well annotated Herbaspirillum species. This analysis demonstrates that high-throughput enzyme predictions, provided by PSAT processing, can be used to identify metabolic potential in an otherwise poorly annotated genome. Lastly, PSAT is a meta-server that combines the results from several sequence-based annotation and function prediction codes, and is available at http://psat.llnl.gov/psat/. PSAT stands apart from other sequence-based genome annotation systems in providing a high-throughput platform for rapid de novo enzyme predictions and sequence annotations over large input protein sequence data sets in FASTA. PSAT is most appropriately applied in annotation of large protein FASTA sets that may or may not be associated with a single genome.
Leung, Elo; Huang, Amy; Cadag, Eithon; ...
2016-01-20
In this study, we introduce the Protein Sequence Annotation Tool (PSAT), a web-based, sequence annotation meta-server for performing integrated, high-throughput, genome-wide sequence analyses. Our goals in building PSAT were to (1) create an extensible platform for integration of multiple sequence-based bioinformatics tools, (2) enable functional annotations and enzyme predictions over large input protein fasta data sets, and (3) provide a web interface for convenient execution of the tools. In this paper, we demonstrate the utility of PSAT by annotating the predicted peptide gene products of Herbaspirillum sp. strain RV1423, importing the results of PSAT into EC2KEGG, and using the resulting functional comparisons to identify a putative catabolic pathway, thereby distinguishing RV1423 from a well annotated Herbaspirillum species. This analysis demonstrates that high-throughput enzyme predictions, provided by PSAT processing, can be used to identify metabolic potential in an otherwise poorly annotated genome. Lastly, PSAT is a meta-server that combines the results from several sequence-based annotation and function prediction codes, and is available at http://psat.llnl.gov/psat/. PSAT stands apart from other sequence-based genome annotation systems in providing a high-throughput platform for rapid de novo enzyme predictions and sequence annotations over large input protein sequence data sets in FASTA. PSAT is most appropriately applied in annotation of large protein FASTA sets that may or may not be associated with a single genome.
Development and in-flight performance of the Mariner 9 spacecraft propulsion system
NASA Technical Reports Server (NTRS)
Evans, D. D.; Cannova, R. D.; Cork, M. J.
1973-01-01
On November 14, 1971, Mariner 9 was decelerated into orbit about Mars by a 1334 N (300 lbf) liquid bipropellant propulsion system. This paper describes and summarizes the development and in-flight performance of this pressure-fed, nitrogen tetroxide/monomethyl hydrazine bipropellant system. The design of all Mariner propulsion subsystems has been predicated upon the premise that simplicity of approach, coupled with thorough qualification and margin-limits testing, is the key to cost-effective reliability. The qualification test program and analytical modeling are also discussed. Since the propulsion subsystem is modular in nature, it was completely checked, serviced, and tested independently of the spacecraft. Proper prediction of in-flight performance required the development of three significant modeling tools to predict and account for nitrogen saturation of the propellant during the six-month coast period and to predict and statistically analyze in-flight data.
Prediction of wastewater treatment plants performance based on artificial fish school neural network
NASA Astrophysics Data System (ADS)
Zhang, Ruicheng; Li, Chong
2011-10-01
A reliable model of a wastewater treatment plant is essential in providing a tool for predicting its performance and forming a basis for controlling the operation of the process. This would minimize operating costs and help assess the stability of the environmental balance. To address the multi-variable, uncertain, and non-linear characteristics of the wastewater treatment system, an artificial fish school neural network prediction model is established based on actual operating data from the wastewater treatment system. The model overcomes several disadvantages of the conventional BP neural network. The calculation results show that predicted values match measured values well, demonstrating the model's ability to simulate and predict plant behavior and to help optimize operating status. The prediction model provides a simple and practical way to support operation and management in wastewater treatment plants, and has practical value for research and engineering.
Assessing the integration of audience response system technology in teaching of anatomical sciences.
Alexander, Cara J; Crescini, Weronika M; Juskewitch, Justin E; Lachman, Nirusha; Pawlina, Wojciech
2009-01-01
The goals of our study were to determine the predictive value and usability of an audience response system (ARS) as a knowledge assessment tool in an undergraduate medical curriculum. Over a three year period (2006-2008), data were collected from first year didactic blocks in Genetics/Histology and Anatomy/Radiology (n = 42-50 per class). During each block, students answered clinically oriented multiple choice questions using the ARS. Students' performances were recorded and cumulative ARS scores were compared with final examination performances. Correlation coefficients between these variables were calculated to assess the existence and direction of an association between ARS and final examination score. If associations existed, univariate models were then constructed using ARS as a predictor of final examination score. Student and faculty perception of ARS difficulty, usefulness, effect on performance, and preferred use were evaluated using a questionnaire. There was a statistically significant positive correlation between ARS and final examination scores in all didactic blocks and predictive univariate models were constructed for each relationship (all P < 0.0001). Students and faculty agreed that ARS was easy to use and a reliable tool for providing real-time feedback that improved their performance and participation. In conclusion, we found ARS to be an effective assessment tool benefiting the faculty and the students in a curriculum focused on interaction and self-directed learning. 2009 American Association of Anatomists
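The two analysis steps described, a correlation test followed by a univariate prediction model, can be sketched with standard tools. The scores below are synthetic placeholders, not study data.

```python
import numpy as np
from scipy import stats

# Synthetic placeholder scores; the study's raw data are not public here.
ars_score = np.array([62.0, 71.5, 80.0, 58.5, 90.0, 75.0, 68.0, 85.5])
final_exam = np.array([65.0, 74.0, 82.5, 61.0, 91.5, 77.0, 70.5, 88.0])

# Step 1: test for an association between ARS and final examination score.
r, p = stats.pearsonr(ars_score, final_exam)
print(f"Pearson r = {r:.3f}, p = {p:.4f}")

# Step 2: if an association exists, fit the univariate prediction model
# final_exam = intercept + slope * ars_score, as in the study design.
if p < 0.05:
    fit = stats.linregress(ars_score, final_exam)
    print(f"final = {fit.intercept:.2f} + {fit.slope:.3f} * ARS")
```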
Meiners, Kelly M; Rush, Douglas K
2017-01-01
Prior studies have explored variables that had predictive relationships with National Physical Therapy Examination (NPTE) score or NPTE failure. The purpose of this study was to explore whether certain variables were predictive of test-takers' first-time score on the NPTE. The population consisted of 134 students who graduated from the university's Professional DPT Program from 2012 to 2014. This quantitative study used a retrospective design. Two separate data analyses were conducted. First, hierarchical linear multiple regression (HMR) analysis was performed to determine which variables were predictive of first-time NPTE score. Second, a correlation analysis was performed on all 18 Physical Therapy Clinical Performance Instrument (PT CPI) 2006 category scores obtained during the first long-term clinical rotation, overall PT CPI 2006 score, and NPTE passage. With all variables entered, the HMR model predicted 39% of the variance seen in NPTE scores. The HMR results showed that physical therapy program first-year GPA (1PTGPA) was the strongest predictor and explained 24% of the variance in NPTE scores (b = 0.572, p < 0.001). The correlational analysis found no statistically significant correlation between the 18 PT CPI 2006 category scores, overall PT CPI 2006 score, and NPTE passage. As 1PTGPA made the largest contribution to predicting NPTE scores, programs need to monitor first-year students who display academic difficulty. PT CPI version 2006 scores were significantly correlated with each other, but not with NPTE score or NPTE passage. Both tools measure many of the same professional requirements but use different modes of assessment, and they may be considered complementary tools for gaining a full picture of both the student's ability and skills.
A Modeling Tool for Household Biogas Burner Flame Port Design
NASA Astrophysics Data System (ADS)
Decker, Thomas J.
Anaerobic digestion is a well-known and potentially beneficial process for rural communities in emerging markets, providing the opportunity to generate usable gaseous fuel from agricultural waste. With recent developments in low-cost digestion technology, communities across the world are gaining affordable access to the benefits of anaerobic digestion derived biogas. For example, biogas can displace conventional cooking fuels such as biomass (wood, charcoal, dung) and Liquefied Petroleum Gas (LPG), effectively reducing harmful emissions and fuel cost respectively. To support the ongoing scaling effort of biogas in rural communities, this study has developed and tested a design tool aimed at optimizing flame port geometry for household biogas-fired burners. The tool consists of a multi-component simulation that incorporates three-dimensional CAD designs with simulated chemical kinetics and computational fluid dynamics. An array of circular and rectangular port designs was developed for a widely available biogas stove (called the Lotus) as part of this study. These port designs were created through guidance from previous studies found in the literature. The three highest performing designs identified by the tool were manufactured and tested experimentally to validate tool output and to compare against the original port geometry. The experimental results aligned with the tool's prediction for the three chosen designs. Each design demonstrated improved thermal efficiency relative to the original, with one configuration of circular ports exhibiting superior performance. The results of the study indicated that designing for a targeted range of port hydraulic diameter, velocity and mixture density in the tool is a relevant way to improve the thermal efficiency of a biogas burner. Conversely, the emissions predictions made by the tool were found to be unreliable and incongruent with laboratory experiments.
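Since the tool designs for a targeted range of port hydraulic diameter, it is worth recalling the defining relation D_h = 4A/P (four times the cross-sectional area over the wetted perimeter). Below is a minimal sketch for the two port shapes studied, with illustrative dimensions rather than the study's actual geometries.

```python
def hydraulic_diameter_circle(d):
    """For a circular port D_h equals the diameter: 4*(pi d^2/4)/(pi d) = d."""
    return d

def hydraulic_diameter_rect(w, h):
    """D_h = 4A/P for a rectangular port of width w and height h."""
    return 4 * (w * h) / (2 * (w + h))

# Illustrative port dimensions (mm); not the study's actual geometries.
print(hydraulic_diameter_circle(2.5))               # 2.5 mm
print(round(hydraulic_diameter_rect(4.0, 1.5), 3))  # 2.182 mm
```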
Thermal cracking performance prediction and asset management integration.
DOT National Transportation Integrated Search
2011-03-01
With shrinking maintenance budgets and the need to do more with less, accurate, robust asset management tools are greatly needed for the transportation engineering community. In addition, the increased use of recycled materials and low energy p...
Integrated modeling tool for performance engineering of complex computer systems
NASA Technical Reports Server (NTRS)
Wright, Gary; Ball, Duane; Hoyt, Susan; Steele, Oscar
1989-01-01
This report summarizes Advanced System Technologies' accomplishments on the Phase 2 SBIR contract NAS7-995. The technical objectives were: (1) to develop an evaluation version of a graphical, integrated modeling language according to the specification resulting from the Phase 2 research; and (2) to determine the degree to which the language meets its objectives by evaluating ease of use, the utility of two sets of performance predictions, and the power of the language constructs. The technical approach followed to meet these objectives was to design, develop, and test an evaluation prototype of a graphical, performance prediction tool. The utility of the prototype was then evaluated by applying it to a variety of test cases found in the literature and in AST case histories. Numerous models were constructed and successfully tested. The major conclusion of this Phase 2 SBIR research and development effort is that complex, real-time computer systems can be specified in a non-procedural manner using combinations of icons, windows, menus, and dialogs. Such a specification technique provides an interface that system designers and architects find natural and easy to use. In addition, PEDESTAL's multiview approach provides system engineers with the capability to perform the trade-offs necessary to produce a design that meets timing performance requirements. Sample system designs analyzed during the development effort showed that models could be constructed in a fraction of the time required by non-visual system design capture tools.
Local Debonding and Fiber Breakage in Composite Materials Modeled Accurately
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Arnold, Steven M.
2001-01-01
A prerequisite for full utilization of composite materials in aerospace components is a set of accurate design and life prediction tools that enable the assessment of component performance and reliability. Such tools assist both structural analysts, who design and optimize structures composed of composite materials, and materials scientists, who design and optimize the composite materials themselves. NASA Glenn Research Center's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) software package (http://www.grc.nasa.gov/WWW/LPB/mac) addresses this need by providing a widely applicable and accurate approach to modeling composite materials. Furthermore, MAC/GMC serves as a platform for incorporating new local models and capabilities that are under development at NASA, enabling these new capabilities to progress rapidly to a stage in which they can be employed by the code's end users.
NASA Astrophysics Data System (ADS)
Salmaso, Veronica; Sturlese, Mattia; Cuzzolin, Alberto; Moro, Stefano
2018-01-01
Molecular docking is a powerful tool in the field of computer-aided molecular design. In particular, it is the technique of choice for the prediction of a ligand pose within its target binding site. A multitude of docking methods is available nowadays, whose performance may vary depending on the data set. Therefore, some non-trivial choices should be made before starting a docking simulation. In the same framework, the selection of the target structure to use can be challenging, since the number of available experimental structures is increasing. Both issues are explored in this work. The pose prediction of a pool of 36 compounds provided by the D3R Grand Challenge 2 organizers was preceded by a pipeline to choose the best protein/docking-method couple for each blind ligand. An integrated benchmark approach including ligand shape comparison and cross-docking evaluations was implemented inside our DockBench software. The results are encouraging and show that paying attention to the choice of the fundamental components of a docking simulation improves the results of binding mode predictions.
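The selection step can be pictured as picking the minimum of a benchmark matrix. The sketch below uses synthetic RMSD values and placeholder structure/method names, not DockBench output.

```python
import numpy as np

# Rows: candidate target structures; columns: docking methods.
# Entries: benchmark RMSD (in Angstrom) for redocked reference ligands --
# synthetic placeholder values for illustration.
structures = ["1ABC", "2DEF", "3GHI"]
methods = ["method_A", "method_B", "method_C"]
rmsd = np.array([[1.2, 2.8, 1.9],
                 [0.8, 1.5, 2.2],
                 [2.4, 1.1, 3.0]])

# Pick the protein/method couple with the best (lowest) benchmark RMSD.
i, j = np.unravel_index(np.argmin(rmsd), rmsd.shape)
print(f"dock with {methods[j]} against {structures[i]} (RMSD {rmsd[i, j]} A)")
```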
OISI dynamic end-to-end modeling tool
NASA Astrophysics Data System (ADS)
Kersten, Michael; Weidler, Alexander; Wilhelm, Rainer; Johann, Ulrich A.; Szerdahelyi, Laszlo
2000-07-01
The OISI dynamic end-to-end modeling tool is tailored to end-to-end modeling and dynamic simulation of Earth- and space-based actively controlled optical instruments, such as optical stellar interferometers. 'End-to-end modeling' denotes that the overall model comprises, in addition to optical sub-models, the structural, sensor, actuator, controller, and disturbance sub-models that influence optical transmission, so that system-level instrument performance under disturbances and active optics can be simulated. This tool has been developed to support performance analysis and prediction as well as control loop design and fine-tuning for OISI, Germany's preparatory program for optical/infrared spaceborne interferometry initiated in 1994 by Dornier Satellitensysteme GmbH in Friedrichshafen.
Computational prediction of type III and IV secreted effectors in Gram-negative bacteria
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDermott, Jason E.; Corrigan, Abigail L.; Peterson, Elena S.
2011-01-01
In this review, we provide an overview of the methods employed by four recent papers that described novel methods for computational prediction of secreted effectors from type III and IV secretion systems in Gram-negative bacteria. We summarize the results of the studies in terms of performance at accurately predicting secreted effectors and the similarities found between secretion signals, which may reflect biologically relevant features for recognition. We discuss the web-based tools for secreted effector prediction described in these studies and announce the availability of our tool, the SIEVE server (http://www.biopilot.org). Finally, we assess the accuracy of the three type III effector prediction methods on a small set of proteins not known prior to the development of these tools that we have recently discovered and validated using both experimental and computational approaches. Our comparison shows that all methods use similar approaches and, in general, arrive at similar conclusions. We discuss the possibility of an order-dependent motif in the secretion signal, which was a point of disagreement in the studies. Our results show that there may be classes of effectors in which the signal has a loosely defined motif, and others in which secretion is dependent only on compositional biases. Computational prediction of secreted effectors from protein sequences represents an important step toward better understanding the interaction between pathogens and hosts.
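The compositional-bias finding suggests feature vectors of the kind sketched below, where an N-terminal window (the window length here is an assumption, not a value from the review) is summarized by its fractional amino-acid composition.

```python
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def nterm_composition(seq, window=30):
    """Fractional amino-acid composition of the N-terminal window,
    a typical feature vector for composition-based effector classifiers."""
    window_seq = seq[:window]
    counts = Counter(window_seq)
    return [counts.get(aa, 0) / len(window_seq) for aa in AMINO_ACIDS]

# Hypothetical effector N-terminus; real classifiers would be trained on
# labeled secreted/non-secreted sequences.
features = nterm_composition("MSTIQPLNRASSAGGSLQPQTSLSTNAGQAL")
print(dict(zip(AMINO_ACIDS, (round(f, 3) for f in features))))
```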
What Matters from Admissions? Identifying Success and Risk Among Canadian Dental Students.
Plouffe, Rachel A; Hammond, Robert; Goldberg, Harvey A; Chahine, Saad
2018-05-01
The aims of this study were to determine whether different student profiles would emerge in terms of high and low GPA performance in each year of dental school and to investigate the utility of preadmissions variables in predicting performance and performance stability throughout each year of dental school. Data from 11 graduating cohorts (2004-14) at the Schulich School of Medicine & Dentistry, University of Western Ontario, Canada, were collected and analyzed using bivariate correlations, latent profile analysis, and hierarchical generalized linear models (HGLMs). The data analyzed were for 616 students in total (332 males and 284 females). Four models were developed to predict adequate and poor performance throughout each of four dental school years. An additional model was developed to predict student performance stability across time. Two separate student profiles reflecting high and low GPA performance across each year of dental school were identified, and scores on cognitive preadmissions variables differentially predicted the probability of grouping into high and low performance profiles. Students with higher pre-dental GPAs and DAT chemistry were most likely to remain stable in a high-performance group across each year of dental school. Overall, the findings suggest that selection committees should consider pre-dental GPA and DAT chemistry scores as important tools for predicting dental school performance and stability across time. This research is important in determining how to better predict success and failure in various areas of preclinical dentistry courses and to provide low-performing students with adequate academic assistance.
Data Visualization Saliency Model: A Tool for Evaluating Abstract Data Visualizations
Matzen, Laura E.; Haass, Michael J.; Divis, Kristin M.; ...
2017-08-29
Evaluating the effectiveness of data visualizations is a challenging undertaking and often relies on one-off studies that test a visualization in the context of one specific task. Researchers across the fields of data science, visualization, and human-computer interaction are calling for foundational tools and principles that could be applied to assessing the effectiveness of data visualizations in a more rapid and generalizable manner. One possibility for such a tool is a model of visual saliency for data visualizations. Visual saliency models are typically based on the properties of the human visual cortex and predict which areas of a scene have visual features (e.g. color, luminance, edges) that are likely to draw a viewer's attention. While these models can accurately predict where viewers will look in a natural scene, they typically do not perform well for abstract data visualizations. In this paper, we discuss the reasons for the poor performance of existing saliency models when applied to data visualizations. We introduce the Data Visualization Saliency (DVS) model, a saliency model tailored to address some of these weaknesses, and we test the performance of the DVS model and existing saliency models by comparing the saliency maps produced by the models to eye tracking data obtained from human viewers. In conclusion, we describe how modified saliency models could be used as general tools for assessing the effectiveness of visualizations, including the strengths and weaknesses of this approach.
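Comparisons of saliency maps against eye-tracking data are commonly scored with an ROC-style AUC, treating fixated pixels as positives. The sketch below uses a synthetic map and fixations; it is not the DVS evaluation code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-ins: a saliency map over a 100x100 visualization and a
# set of human fixation coordinates (row, col).
saliency = rng.random((100, 100))
fixations = [(10, 12), (48, 55), (80, 31), (22, 90)]

# Positives: saliency at fixated pixels. Negatives: randomly sampled pixels
# (a crude background model; real evaluations sample more carefully).
pos = [saliency[r, c] for r, c in fixations]
neg = rng.choice(saliency.ravel(), size=100, replace=False).tolist()

labels = [1] * len(pos) + [0] * len(neg)
scores = pos + neg
print(f"fixation-prediction AUC: {roc_auc_score(labels, scores):.3f}")
```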
Gaussian process regression for tool wear prediction
NASA Astrophysics Data System (ADS)
Kong, Dongdong; Chen, Yongjie; Li, Ning
2018-05-01
To realize and accelerate the pace of intelligent manufacturing, this paper presents a novel tool wear assessment technique based on integrated radial basis function based kernel principal component analysis (KPCA_IRBF) and Gaussian process regression (GPR) for accurate, real-time monitoring of the in-process tool wear parameter (flank wear width). KPCA_IRBF is a new nonlinear dimension-increment technique, proposed here for the first time for feature fusion. The tool wear predictive value and the corresponding confidence interval are both provided by the GPR model. Moreover, GPR performs better than artificial neural networks (ANN) and support vector machines (SVM) in prediction accuracy, since Gaussian noise can be modeled quantitatively in the GPR model. However, noise seriously affects the stability of the confidence interval. In this work, the proposed KPCA_IRBF technique helps remove the noise and weaken its negative effects, greatly compressing and smoothing the confidence interval, which is conducive to monitoring tool wear accurately. Moreover, the kernel parameter in KPCA_IRBF can be selected from a much larger region than in the conventional KPCA_RBF technique, which helps improve the efficiency of model construction. Ten sets of cutting tests were conducted to validate the effectiveness of the presented tool wear assessment technique. The experimental results show that the in-process flank wear width of tool inserts can be monitored accurately by the presented technique, which is robust under a variety of cutting conditions. This study lays the foundation for tool wear monitoring in real industrial settings.
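The prediction-plus-confidence-interval behavior attributed to GPR is easy to demonstrate with an off-the-shelf implementation. The sketch below uses a single generic feature (cutting time) and synthetic wear data, not the paper's KPCA_IRBF fused features.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic stand-in data: cutting time (min) vs. flank wear width (mm).
X = np.array([[2.0], [5.0], [8.0], [12.0], [15.0], [20.0]])
y = np.array([0.05, 0.09, 0.12, 0.17, 0.20, 0.28])

# WhiteKernel models the measurement noise quantitatively, which is what
# lets GPR attach a confidence interval to each prediction.
kernel = 1.0 * RBF(length_scale=5.0) + WhiteKernel(noise_level=1e-4)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

X_new = np.array([[10.0], [18.0]])
mean, std = gpr.predict(X_new, return_std=True)
for x, m, s in zip(X_new.ravel(), mean, std):
    print(f"t={x:4.1f} min: wear {m:.3f} mm, 95% CI +/- {1.96 * s:.3f} mm")
```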
TADSim: Discrete Event-based Performance Prediction for Temperature Accelerated Dynamics
Mniszewski, Susan M.; Junghans, Christoph; Voter, Arthur F.; ...
2015-04-16
Next-generation high-performance computing will require more scalable and flexible performance prediction tools to evaluate software-hardware co-design choices relevant to scientific applications and hardware architectures. Here, we present a new class of tools called application simulators: parameterized, fast-running proxies of large-scale scientific applications using parallel discrete event simulation. Parameterized choices for the algorithmic method and hardware options provide a rich space for design exploration and allow us to quickly find well-performing software-hardware combinations. We demonstrate our approach with a TADSim simulator that models the temperature-accelerated dynamics (TAD) method, an algorithmically complex and parameter-rich member of the accelerated molecular dynamics (AMD) family of molecular dynamics methods. The essence of the TAD application is captured without the computational expense and resource usage of the full code. We accomplish this by identifying the time-intensive elements, quantifying algorithm steps in terms of those elements, abstracting them out, and replacing them by the passage of time. We use TADSim to quickly characterize the runtime performance and algorithmic behavior of the otherwise long-running simulation code. We extend TADSim to model algorithm extensions, such as speculative spawning of the compute-bound stages, and predict performance improvements without having to implement such a method. Validation against the actual TAD code shows close agreement for the evolution of an example physical system, a silver surface. Finally, focused parameter scans have allowed us to study algorithm parameter choices over far more scenarios than would be possible with the actual simulation. This has led to interesting performance-related insights and suggested extensions.
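The central abstraction, replacing compute-bound stages by the passage of time, is the essence of discrete-event simulation. Below is a minimal event-loop sketch in that spirit; the stage names, durations, and control flow are illustrative, not TADSim parameters.

```python
import heapq

# Compute-bound stages are not executed, just represented by their modeled
# durations on a simulated clock.
stage_cost = {"md_block": 4.0, "force_eval": 1.5}

events = []  # priority queue of (completion_time, stage)
clock = 0.0
heapq.heappush(events, (clock + stage_cost["md_block"], "md_block"))

while events:
    clock, stage = heapq.heappop(events)
    print(f"t={clock:6.2f}: finished {stage}")
    # Toy control flow: alternate MD blocks and force evaluations until a
    # simulated-time budget is exhausted.
    if stage == "md_block":
        heapq.heappush(events, (clock + stage_cost["force_eval"], "force_eval"))
    elif stage == "force_eval" and clock < 15.0:
        heapq.heappush(events, (clock + stage_cost["md_block"], "md_block"))

print(f"simulated wall-clock estimate: {clock:.2f} time units")
```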
NASA Technical Reports Server (NTRS)
Consiglio, Maria C.; Hoadley, Sherwood T.; Allen, B. Danette
2009-01-01
Wind prediction errors are known to affect the performance of automated air traffic management tools that rely on aircraft trajectory predictions. In particular, automated separation assurance tools, planned as part of the NextGen concept of operations, must be designed to account and compensate for the impact of wind prediction errors and other system uncertainties. In this paper we describe a high-fidelity batch simulation study designed to estimate the separation distance required to compensate for the effects of wind-prediction errors at increasing traffic densities on an airborne separation assistance system. These experimental runs are part of the Safety Performance of Airborne Separation experiment suite, which examines the safety implications of prediction errors and system uncertainties on airborne separation assurance systems. In this experiment, wind-prediction errors were varied between zero and forty knots while traffic density was increased to several times current traffic levels. In order to accurately measure the full unmitigated impact of wind-prediction errors, no uncertainty buffers were added to the separation minima. The goal of the study was to measure the impact of wind-prediction errors in order to estimate the additional separation buffers necessary to preserve separation and to provide a baseline for future analyses. Buffer estimations from this study will be used and verified in upcoming safety evaluation experiments under similar simulation conditions. Results suggest that the strategic airborne separation functions exercised in this experiment can sustain wind prediction errors up to 40 kts at current-day air traffic density with no additional separation distance buffer, and at eight times current-day density with no more than a 60% increase in separation distance buffer.
Applied Meteorology Unit (AMU) Quarterly Report - Fourth Quarter FY-09
NASA Technical Reports Server (NTRS)
Bauman, William; Crawford, Winifred; Barrett, Joe; Watson, Leela; Wheeler, Mark
2009-01-01
This report summarizes the Applied Meteorology Unit (AMU) activities for the fourth quarter of Fiscal Year 2009 (July - September 2009). Task reports include: (1) Peak Wind Tool for User Launch Commit Criteria (LCC), (2) Objective Lightning Probability Tool, Phase III, (3) Peak Wind Tool for General Forecasting, Phase II, (4) Update and Maintain Advanced Regional Prediction System (ARPS) Data Analysis System (ADAS), (5) Verify MesoNAM Performance, and (6) Develop a Graphical User Interface to update selected parameters for the Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT).
An Object-Based Approach to Evaluation of Climate Variability Projections and Predictions
NASA Astrophysics Data System (ADS)
Ammann, C. M.; Brown, B.; Kalb, C. P.; Bullock, R.
2017-12-01
Evaluations of the performance of earth system model predictions and projections are of critical importance to enhance usefulness of these products. Such evaluations need to address specific concerns depending on the system and decisions of interest; hence, evaluation tools must be tailored to inform about specific issues. Traditional approaches that summarize grid-based comparisons of analyses and models, or between current and future climate, often do not reveal important information about the models' performance (e.g., spatial or temporal displacements; the reason behind a poor score) and are unable to accommodate these specific information needs. For example, summary statistics such as the correlation coefficient or the mean-squared error provide minimal information to developers, users, and decision makers regarding what is "right" and "wrong" with a model. New spatial and temporal-spatial object-based tools from the field of weather forecast verification (where comparisons typically focus on much finer temporal and spatial scales) have been adapted to more completely answer some of the important earth system model evaluation questions. In particular, the Method for Object-based Diagnostic Evaluation (MODE) tool and its temporal (three-dimensional) extension (MODE-TD) have been adapted for these evaluations. More specifically, these tools can be used to address spatial and temporal displacements in projections of El Nino-related precipitation and/or temperature anomalies, ITCZ-associated precipitation areas, atmospheric rivers, seasonal sea-ice extent, and other features of interest. Examples of several applications of these tools in a climate context will be presented, using output of the CESM large ensemble. In general, these tools provide diagnostic information about model performance - accounting for spatial, temporal, and intensity differences - that cannot be achieved using traditional (scalar) model comparison approaches. Thus, they can provide more meaningful information that can be used in decision-making and planning. Future extensions and applications of these tools in a climate context will be considered.
Rubin, Katrine Hass; Friis-Holmberg, Teresa; Hermann, Anne Pernille; Abrahamsen, Bo; Brixen, Kim
2013-08-01
A huge number of risk assessment tools have been developed. Far from all have been validated in external studies, many lack methodological and transparent evidence, and few are integrated into national guidelines. Therefore, we performed a systematic review to provide an overview of existing valid and reliable risk assessment tools for prediction of osteoporotic fractures. Additionally, we aimed to determine whether the performance of each tool was sufficient for practical use and, last, to examine whether the complexity of the tools influenced their discriminative power. We searched the PubMed, Embase, and Cochrane databases for papers and evaluated these with respect to methodological quality using the Quality Assessment Tool for Diagnostic Accuracy Studies (QUADAS) checklist. A total of 48 tools were identified; 20 had been externally validated, but only six tools had been tested more than once in a population-based setting with acceptable methodological quality. None of the tools performed consistently better than the others, and simple tools (i.e., the Osteoporosis Self-assessment Tool [OST], Osteoporosis Risk Assessment Instrument [ORAI], and Garvan Fracture Risk Calculator [Garvan]) often did as well as or better than more complex tools (i.e., Simple Calculated Risk Estimation Score [SCORE], WHO Fracture Risk Assessment Tool [FRAX], and Qfracture). No studies determined the effectiveness of tools in selecting patients for therapy and thus improving fracture outcomes. High-quality studies in randomized designs with population-based cohorts with different case mixes are needed. Copyright © 2013 American Society for Bone and Mineral Research.
Yu, Kun-Hsing; Fitzpatrick, Michael R; Pappas, Luke; Chan, Warren; Kung, Jessica; Snyder, Michael
2017-09-12
Precision oncology is an approach that accounts for individual differences to guide cancer management. Omics signatures have been shown to predict clinical traits for cancer patients. However, the vast amount of omics information poses an informatics challenge in systematically identifying patterns associated with health outcomes, and no general-purpose data-mining tool exists for physicians, medical researchers, and citizen scientists without significant training in programming and bioinformatics. To bridge this gap, we built the Omics AnalySIs System for PRecision Oncology (OASISPRO), a web-based system to mine the quantitative omics information from The Cancer Genome Atlas (TCGA). This system effectively visualizes patients' clinical profiles, executes machine-learning algorithms of choice on the omics data, and evaluates the prediction performance using held-out test sets. With this tool, we successfully identified genes strongly associated with tumor stage, and accurately predicted patients' survival outcomes in many cancer types, including mesothelioma and adrenocortical carcinoma. By identifying the links between omics and clinical phenotypes, this system will facilitate omics studies on precision cancer medicine and contribute to establishing personalized cancer treatment plans. This web-based tool is available at http://tinyurl.com/oasispro; source codes are available at http://tinyurl.com/oasisproSourceCode. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
Predicting silicon pore optics
NASA Astrophysics Data System (ADS)
Vacanti, Giuseppe; Barriére, Nicolas; Bavdaz, Marcos; Chatbi, Abdelhakim; Collon, Maximilien; Dekker, Danielle; Girou, David; Günther, Ramses; van der Hoeven, Roy; Landgraf, Boris; Sforzini, Jessica; Vervest, Mark; Wille, Eric
2017-09-01
Continuing improvement of Silicon Pore Optics (SPO) calls for regular extension and validation of the tools used to model and predict their X-ray performance. In this paper we present an updated geometrical model for the SPO optics and describe how we make use of the surface metrology collected during each of the SPO manufacturing runs. The new geometrical model affords the user a finer degree of control over the mechanical details of the SPO stacks, while a standard interface has been developed to make use of any type of metrology that can return changes in the local surface normal of the reflecting surfaces. Comparisons between the predicted and actual performance of sample optics will be shown and discussed.
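Metrology enters such models as perturbations of the local surface normal, and the effect on a reflected ray follows from r = d - 2(d.n)n. A small sketch with a hypothetical figure-error tilt (the geometry and error magnitude are illustrative only):

```python
import numpy as np

def reflect(d, n):
    """Reflect ray direction d about unit surface normal n: r = d - 2(d.n)n."""
    return d - 2.0 * np.dot(d, n) * n

d = np.array([0.0, 0.02, -1.0])          # incoming ray (grazing geometry)
d /= np.linalg.norm(d)

n_ideal = np.array([0.0, 1.0, 0.0])      # design surface normal
tilt = 1e-5                              # hypothetical local figure error (rad)
n_real = np.array([0.0, np.cos(tilt), np.sin(tilt)])

r_ideal = reflect(d, n_ideal)
r_real = reflect(d, n_real)
angle_err = np.arccos(np.clip(np.dot(r_ideal, r_real), -1.0, 1.0))
print(f"reflected-ray deviation: {angle_err * 206265:.3f} arcsec")
```

A small tilt of the normal deflects the reflected ray by roughly twice the tilt angle, which is why normal-direction metrology is the natural input for performance prediction.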
Mendes Silva, Rita; Clode, Nuno
2018-01-01
External cephalic version (ECV) is a maneuver that enables the rotation of a non-cephalic fetus to a cephalic presentation. The Newman-Peacock (NP) index, proposed by Newman et al. in a study published in 1993, was described as a tool for predicting the success of this procedure; it was validated in a North American population, and three prognostic groups were identified. Our aims were to evaluate the value of the NP score for the prediction of a successful ECV in a Portuguese obstetrical population, and to evaluate maternal and fetal safety. We present an observational study conducted from 1997 to 2016 with pregnant women at 36-38 weeks of pregnancy who were candidates for external cephalic version in our department. Demographic and obstetrical data were collected, including the parameters of the NP index (parity, cervical dilatation, estimated fetal weight, placental location, and fetal station). The NP score was calculated, and the percentages of success were compared among the three prognostic groups and with the original study by Newman et al. The performance of the score was determined using the Student t-test, the Chi-squared test, and a receiver operating characteristic (ROC) curve. In total, 337 women were included. The overall success rate was 43.6%. The univariate analysis revealed that multiparity, posterior placentation, and a less engaged fetus were factors that favored a successful maneuver (p < 0.05). Moreover, a higher amniotic fluid index was also a relevant predictive factor (p < 0.05). The Newman-Peacock score performed more poorly in our population than in the original study sample, but we still found a positive relationship between higher scores and a higher likelihood of success (p < 0.001). No fetal or maternal morbidities were registered. Although the Newman-Peacock score performed more poorly among our population than in the original study, the results suggest that this score is still a useful tool to guide our clinical practice and to counsel candidates regarding ECV. Thieme Revinter Publicações Ltda, Rio de Janeiro, Brazil.
Gredlein, Jeffrey M; Bjorklund, David F
2005-06-01
Three-year-old children were observed in two free-play sessions and participated in a toy-retrieval task, in which only one of six tools could be used to retrieve an out-of-reach toy. Boys engaged in more object-oriented play than girls and were more likely to use tools to retrieve the toy during the baseline tool-use task. All children who did not retrieve the toy during the baseline trials did so after being given a hint, and performance on a transfer-of-training tool-use task approached ceiling levels. This suggests that the sex difference in tool use observed during the baseline phase does not reflect a difference in competency, but rather a sex difference in motivation to interact with objects. Amount of time boys, but not girls, spent in object-oriented play during the free-play sessions predicted performance on the tool-use task. The findings are interpreted in terms of evolutionary theory, consistent with the idea that boys' and girls' play styles evolved to prepare them for adult life in traditional environments.
A tool for modeling concurrent real-time computation
NASA Technical Reports Server (NTRS)
Sharma, D. D.; Huang, Shie-Rei; Bhatt, Rahul; Sridharan, N. S.
1990-01-01
Real-time computation is a significant area of research in general, and in AI in particular. The complexity of practical real-time problems demands the use of knowledge-based problem solving techniques while satisfying real-time performance constraints. Since the demands of a complex real-time problem cannot be predicted (owing to the dynamic nature of the environment), powerful dynamic resource control techniques are needed to monitor and control the performance. A real-time computation model for a real-time tool, an implementation of the QP-Net simulator on a Symbolics machine, and an implementation on a Butterfly multiprocessor machine are briefly described.
Selection into medical school: from tools to domains.
Wilkinson, Tom M; Wilkinson, Tim J
2016-10-03
Most research into the validity of admissions tools focuses on the isolated correlations of individual tools with later outcomes. Instead, looking at how domains of attributes, rather than tools, predict later success is likely to be more generalizable. We aim to produce a blueprint for an admissions scheme that is broadly relevant across institutions. We broke down all measures used for admissions at one medical school into the smallest possible component scores. We grouped these into domains on the basis of a multicollinearity analysis, and conducted a regression analysis to determine the independent validity of each domain to predict outcomes of interest. We identified four broad domains: logical reasoning and problem solving, understanding people, communication skills, and biomedical science. Each was independently and significantly associated with performance in final medical school examinations. We identified two potential errors in the design of admissions schema that can undermine their validity: focusing on tools rather than outcomes, and including a wide range of measures without objectively evaluating the independent contribution of each. Both could be avoided by following a process of programmatic assessment for selection.
ERIC Educational Resources Information Center
Musso, Mariel F.; Kyndt, Eva; Cascallar, Eduardo C.; Dochy, Filip
2013-01-01
Many studies have explored the contribution of different factors from diverse theoretical perspectives to the explanation of academic performance. These factors have been identified as having important implications not only for the study of learning processes, but also as tools for improving curriculum designs, tutorial systems, and students'…
Separation analysis, a tool for analyzing multigrid algorithms
NASA Technical Reports Server (NTRS)
Costiner, Sorin; Taasan, Shlomo
1995-01-01
The separation of vectors by multigrid (MG) algorithms is applied to the study of convergence and to the prediction of the performance of MG algorithms. The separation operator for a two-level cycle algorithm is derived. It is used to analyze the efficiency of the cycle when mixing of eigenvectors occurs. In particular cases the separation analysis reduces to Fourier-type analysis. The separation operator of a two-level cycle for a Schrödinger eigenvalue problem is derived and analyzed in a Fourier basis. Separation analysis gives information on how to choose relaxation schemes and inter-level transfers. Separation analysis is a tool for analyzing and designing algorithms, and for optimizing their performance.
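For reference, two-level analyses of this kind are typically built on the standard two-grid error-propagation operator; the notation below is the generic textbook form, not the paper's separation operator itself.

```latex
% Standard two-grid error-propagation operator (generic notation):
% S   -- smoother iteration matrix, applied \nu_1 times before and
%        \nu_2 times after coarse-grid correction
% P,R -- prolongation and restriction; A_c the coarse-grid operator
\[
  E_{TG} \;=\; S^{\nu_2}\,\bigl(I - P\,A_c^{-1}\,R\,A\bigr)\,S^{\nu_1},
  \qquad A_c = R\,A\,P .
\]
% Separation-type analyses study how E_{TG} acts on (and mixes) the
% eigenvector components of the error.
```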
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Zi-Kui; Gleeson, Brian; Shang, Shunli
This project developed computational tools that can complement and support experimental efforts in order to enable discovery and more efficient development of Ni-base structural materials and coatings. The project goal was reached through an integrated computation-predictive and experimental-validation approach, including first-principles calculations, thermodynamic CALPHAD (CALculation of PHAse Diagram) modeling, and experimental investigations on compositions relevant to Ni-base superalloys and coatings in terms of oxide layer growth and microstructure stability. The developed description covers composition ranges typical for coating alloys and hence allows prediction of thermodynamic properties for these material systems. The calculation of phase compositions, phase fractions, and phase stabilities, which are directly related to properties such as ductility and strength, was a valuable contribution, along with the collection of computational tools that are required to meet the increasing demands for strong, ductile, and environmentally protective coatings. Specifically, a suitable thermodynamic description for the Ni-Al-Cr-Co-Si-Hf-Y system was developed for bulk alloy and coating compositions. Experiments were performed to validate and refine the thermodynamics from the CALPHAD modeling approach. Additionally, alloys produced using predictions from the current computational models were studied in terms of their oxidation performance. Finally, results obtained from experiments aided in the development of a thermodynamic modeling automation tool, ESPEI/pycalphad, for more rapid discovery and development of new materials.
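pycalphad, the open-source engine paired with ESPEI above, exposes an equilibrium calculation whose call shape is sketched below. The TDB file name and the composition/temperature conditions are placeholders; a real run requires an assessed database such as the thermodynamic description developed in this project.

```python
from pycalphad import Database, equilibrium
import pycalphad.variables as v

# Hypothetical thermodynamic database file; a real calculation needs a TDB
# with assessed parameters for the system of interest.
dbf = Database("ni_al_cr.tdb")
comps = ["NI", "AL", "CR", "VA"]   # VA = vacancies, required by CALPHAD models
phases = list(dbf.phases.keys())

# Stable phases and their fractions at a coating-relevant condition.
eq = equilibrium(dbf, comps, phases,
                 {v.X("AL"): 0.10, v.X("CR"): 0.08,
                  v.T: 1273.0, v.P: 101325.0, v.N: 1})
print(eq.Phase.values.squeeze())   # stable phases at this condition
print(eq.NP.values.squeeze())      # their molar fractions
```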
Akseli, Ilgaz; Xie, Jingjin; Schultz, Leon; Ladyzhynsky, Nadia; Bramante, Tommasina; He, Xiaorong; Deanne, Rich; Horspool, Keith R; Schwabe, Robert
2017-01-01
Enabling the paradigm of quality by design requires the ability to quantitatively correlate material properties and process variables to measurable product performance attributes. Conventional, quality-by-test methods for determining tablet breaking force and disintegration time usually involve destructive tests, which consume a significant amount of time and labor and provide limited information. Recent advances in material characterization, statistical analysis, and machine learning have provided multiple tools that have the potential to support nondestructive, fast, and accurate approaches in drug product development. In this work, a methodology to predict the breaking force and disintegration time of tablet formulations using nondestructive ultrasonics and machine learning tools was developed. The input variables to the model include intrinsic properties of the formulation and extrinsic process variables influencing the tablet during manufacturing. The model has been applied to predict breaking force and disintegration time using small quantities of active pharmaceutical ingredient and prototype formulation designs. The novel approach presented is a step toward rational design of a robust drug product based on insight into the performance of common materials during formulation and process development. It may also help expedite the drug product development timeline and reduce active pharmaceutical ingredient usage while improving the efficiency of the overall process. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
Yehya, Nadir; Wong, Hector R
2018-01-01
The original Pediatric Sepsis Biomarker Risk Model and revised (Pediatric Sepsis Biomarker Risk Model-II) biomarker-based risk prediction models have demonstrated utility for estimating baseline 28-day mortality risk in pediatric sepsis. Given the paucity of prediction tools in pediatric acute respiratory distress syndrome, and given the overlapping pathophysiology between sepsis and acute respiratory distress syndrome, we tested the utility of Pediatric Sepsis Biomarker Risk Model and Pediatric Sepsis Biomarker Risk Model-II for mortality prediction in a cohort of pediatric acute respiratory distress syndrome, with an a priori plan to revise the model if these existing models performed poorly. Prospective observational cohort study. University affiliated PICU. Mechanically ventilated children with acute respiratory distress syndrome. Blood collection within 24 hours of acute respiratory distress syndrome onset and biomarker measurements. In 152 children with acute respiratory distress syndrome, Pediatric Sepsis Biomarker Risk Model performed poorly and Pediatric Sepsis Biomarker Risk Model-II performed modestly (areas under receiver operating characteristic curve of 0.61 and 0.76, respectively). Therefore, we randomly selected 80% of the cohort (n = 122) to rederive a risk prediction model for pediatric acute respiratory distress syndrome. We used classification and regression tree methodology, considering the Pediatric Sepsis Biomarker Risk Model biomarkers in addition to variables relevant to acute respiratory distress syndrome. The final model was comprised of three biomarkers and age, and more accurately estimated baseline mortality risk (area under receiver operating characteristic curve 0.85, p < 0.001 and p = 0.053 compared with Pediatric Sepsis Biomarker Risk Model and Pediatric Sepsis Biomarker Risk Model-II, respectively). The model was tested in the remaining 20% of subjects (n = 30) and demonstrated similar test characteristics. A validated, biomarker-based risk stratification tool designed for pediatric sepsis was adapted for use in pediatric acute respiratory distress syndrome. The newly derived Pediatric Acute Respiratory Distress Syndrome Biomarker Risk Model demonstrates good test characteristics internally and requires external validation in a larger cohort. Tools such as Pediatric Acute Respiratory Distress Syndrome Biomarker Risk Model have the potential to provide improved risk stratification and prognostic enrichment for future trials in pediatric acute respiratory distress syndrome.
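The derivation strategy, classification and regression trees on biomarkers plus age with an 80/20 derivation/test split, can be sketched with scikit-learn. The data below are synthetic stand-ins; the features are placeholders, not the model's actual biomarkers.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Synthetic stand-ins: three biomarker levels plus age, and 28-day mortality.
X = rng.random((152, 4))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.3, 152) > 1.0).astype(int)

# Mirror the 80/20 derivation/test split described in the study.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

cart = DecisionTreeClassifier(max_depth=3, min_samples_leaf=10).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, cart.predict_proba(X_te)[:, 1])
print(f"held-out AUC: {auc:.2f}")
```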
Improved analysis tool for concrete pavement : [project summary].
DOT National Transportation Integrated Search
2017-10-01
University of Florida researchers developed 3D-FE models to more accurately predict the behavior of concrete slabs. They also followed up on a project to characterize strain gauge performance for a Florida Department of Transportation (FDOT) concrete...
Sea-air boundary meteorological sensor
NASA Astrophysics Data System (ADS)
Barbosa, Jose G.
2015-05-01
The atmospheric environment can significantly affect radio frequency and optical propagation. In the RF spectrum, refraction and ducting can degrade or enhance communications and radar coverage. Platforms in or beneath refractive boundaries can exploit the benefits or suffer the effects of the atmospheric boundary layers. Evaporative ducts and surface-based ducts are of most concern for ocean surface platforms, and evaporative ducts are almost always present along the sea-air interface. The atmospheric environment also degrades the resolution and visibility of electro-optical systems. The atmosphere is not uniform, and under heterogeneous conditions homogeneous models can incur substantial propagation errors over large distances. An accurate and portable atmospheric sensor to profile the vertical index of refraction is needed for mission planning, post analysis, and in-situ performance assessment. A meteorological instrument used in conjunction with a radio frequency and electro-optical propagation prediction tactical decision aid tool would give military platforms, in real time, the ability to assess communication system propagation ranges, radar detection and vulnerability ranges, satellite communications vulnerability, laser range finder performance, and imaging system performance. Raman lidar has been shown to be capable of measuring the atmospheric parameters needed to profile the atmospheric environment. The atmospheric profile could then be used as input to a tactical decision aid tool to make propagation predictions.
Fischer, John P; Nelson, Jonas A; Shang, Eric K; Wink, Jason D; Wingate, Nicholas A; Woo, Edward Y; Jackson, Benjamin M; Kovach, Stephen J; Kanchwala, Suhail
2014-12-01
Groin wound complications after open vascular surgery procedures are common, morbid, and costly. The purpose of this study was to generate a simple, validated, clinically usable risk assessment tool for predicting groin wound morbidity after infra-inguinal vascular surgery. A retrospective review of consecutive patients undergoing groin cutdowns for femoral access between 2005 and 2011 was performed. Patients necessitating salvage flaps were compared with those who did not, and a stepwise logistic regression was performed and validated using a bootstrap technique. Utilising this analysis, a simplified risk score was developed to predict the risk of developing a wound that would necessitate salvage. A total of 925 patients were included in the study. The salvage flap rate was 11.2% (n = 104). Predictors determined by logistic regression included prior groin surgery (OR = 4.0, p < 0.001), prosthetic graft (OR = 2.7, p < 0.001), coronary artery disease (OR = 1.8, p = 0.019), peripheral arterial disease (OR = 5.0, p < 0.001), and obesity (OR = 1.7, p = 0.039). Based upon the respective logistic coefficients, a simplified scoring system was developed to enable preoperative risk stratification regarding the likelihood of a significant complication requiring a salvage muscle flap. The c-statistic for the regression demonstrated excellent discrimination at 0.89. This study presents a simple, internally validated risk assessment tool that accurately predicts wound morbidity requiring flap salvage in open groin vascular surgery patients. The preoperatively high-risk patient can be identified and selectively targeted as a candidate for a prophylactic muscle flap.
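Turning logistic coefficients into a simplified integer score is a common final step in building such tools. The sketch below scales ln(OR) using the odds ratios reported above, but the scaling and resulting point values are illustrative, not the authors' published scoring system.

```python
import math

# Odds ratios reported in the abstract for each predictor.
odds_ratios = {
    "prior groin surgery": 4.0,
    "prosthetic graft": 2.7,
    "coronary artery disease": 1.8,
    "peripheral arterial disease": 5.0,
    "obesity": 1.7,
}

# A common simplification: points proportional to the logistic coefficient
# ln(OR), scaled so the weakest predictor maps to one point, then rounded.
scale = 1.0 / math.log(min(odds_ratios.values()))
points = {k: round(math.log(orr) * scale) for k, orr in odds_ratios.items()}
print(points)

patient = {"prosthetic graft", "obesity"}   # hypothetical patient factors
print("risk score:", sum(points[f] for f in patient))
```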
The malnutrition screening tool versus objective measures to detect malnutrition in hip fracture.
Bell, J J; Bauer, J D; Capra, S
2013-12-01
The Malnutrition Screening Tool (MST) is the most commonly used screening tool in Australia. Poor screening tool sensitivity may lead to an under-diagnosis of malnutrition, with potential patient and economic ramifications. The present study aimed to determine whether the MST or anthropometric parameters adequately detect malnutrition in patients who were admitted to a hip fracture unit. Data were analysed for a prospective convenience sample (n = 100). MST screening was independently undertaken by nursing staff and a nutrition assistant. Mid upper arm circumference (MUAC) was measured by a trained nutrition assistant. Nutritional risk [MST score ≥ 2, body mass index (BMI) < 22 kg/m², or MUAC < 25 cm] was compared with malnutrition diagnosed by accredited practicing dietitians using International Classification of Diseases version 10-Australian Modification (ICD10-AM) coding criteria. Malnutrition prevalence was 37.5% using ICD10-AM criteria. Delirium, dementia or preadmission cognitive impairment was present in 65% of patients. The BMI as a nutrition risk screen was the most valid predictor of malnutrition (sensitivity 75%; specificity 93%; positive predictive value 73%; negative predictive value 84%). Nursing MST screening was the least valid (sensitivity 73%; specificity 55%; positive predictive value 50%; negative predictive value 77%). There was only fair agreement between nursing and nutrition assistant screening using the MST (κ = 0.28). In this population with a high prevalence of delirium and dementia, further investigation is warranted into the performance of nutrition screening tools and anthropometric parameters such as BMI. All tools failed to predict a considerable number of patients with malnutrition. This may result in the under-diagnosis and treatment of malnutrition, leading to case-mix funding losses. © 2013 The Authors Journal of Human Nutrition and Dietetics © 2013 The British Dietetic Association Ltd.
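The four screening statistics reported here all derive from a single 2x2 table. A minimal helper, with illustrative counts rather than the study's table:

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 screening table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Illustrative counts only (malnourished vs. not, screen-positive vs. not).
metrics = screening_metrics(tp=28, fp=10, fn=9, tn=53)
print({k: f"{v:.0%}" for k, v in metrics.items()})
```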
Whole-Genome Thermodynamic Analysis Reduces siRNA Off-Target Effects
Chen, Xi; Liu, Peng; Chou, Hui-Hsien
2013-01-01
Small interfering RNAs (siRNAs) are important tools for knocking down targeted genes, and have been widely applied to biological and biomedical research. To design siRNAs, two important aspects must be considered: the potency in knocking down target genes and the off-target effect on any nontarget genes. Although many studies have produced useful tools to design potent siRNAs, off-target prevention has mostly been delegated to sequence-level alignment tools such as BLAST. We hypothesize that whole-genome thermodynamic analysis can identify potential off-targets with higher precision and help us avoid siRNAs that may have strong off-target effects. To validate this hypothesis, two siRNA sets were designed to target three human genes IDH1, ITPR2 and TRIM28. They were selected from the output of two popular siRNA design tools, siDirect and siDesign. Both siRNA design tools have incorporated sequence-level screening to avoid off-targets, thus their output is believed to be optimal. However, one of the sets we tested has off-target genes predicted by Picky, a whole-genome thermodynamic analysis tool. Picky can identify off-target genes that may hybridize to a siRNA within a user-specified melting temperature range. Our experiments validated that some off-target genes predicted by Picky can indeed be inhibited by siRNAs. Similar experiments were performed using commercially available siRNAs and a few off-target genes were also found to be inhibited as predicted by Picky. In summary, we demonstrate that whole-genome thermodynamic analysis can identify off-target genes that are missed in sequence-level screening. Because Picky prediction is deterministic according to thermodynamics, if a siRNA candidate has no Picky predicted off-targets, it is unlikely to cause off-target effects. Therefore, we recommend including Picky as an additional screening step in siRNA design. PMID:23484018
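Picky's screening is based on full duplex thermodynamics; as a much cruder stand-in, the sketch below flags transcript sites whose matched bases alone give a Wallace-rule melting temperature (Tm = 2(A+T) + 4(G+C)) inside a user-specified window. The sequences are hypothetical, the DNA alphabet is used for simplicity, and the Tm window is an assumption.

```python
def wallace_tm(matches):
    """Wallace-rule duplex Tm over matched base pairs: 2*(A+T) + 4*(G+C)."""
    at = sum(1 for b in matches if b in "AT")
    gc = sum(1 for b in matches if b in "GC")
    return 2 * at + 4 * gc

COMP = str.maketrans("ACGT", "TGCA")

def offtarget_sites(sirna, transcript, tm_window=(48, 64)):
    """Flag transcript sites whose duplex with the siRNA would melt inside
    the user-specified Tm window: per-position matching plus Wallace rule,
    a crude stand-in for nearest-neighbor thermodynamic screening."""
    target = sirna.translate(COMP)[::-1]   # site a perfect match would show
    n, hits = len(target), []
    for i in range(len(transcript) - n + 1):
        window = transcript[i:i + n]
        matched = [b for b, t in zip(window, target) if b == t]
        tm = wallace_tm(matched)
        if tm_window[0] <= tm <= tm_window[1]:
            hits.append((i, tm))
    return hits

# Hypothetical guide and transcript fragment.
print(offtarget_sites("ATGCTAGCTAGGCTAGCTA", "CCCTAGCTAGCCTAGCTAGCATCCC"))
```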
ConoDictor: a tool for prediction of conopeptide superfamilies.
Koua, Dominique; Brauer, Age; Laht, Silja; Kaplinski, Lauris; Favreau, Philippe; Remm, Maido; Lisacek, Frédérique; Stöcklin, Reto
2012-07-01
ConoDictor is a tool that enables fast and accurate classification of conopeptides into superfamilies based on their amino acid sequence. ConoDictor combines predictions from two complementary approaches-profile hidden Markov models and generalized profiles. Results appear in a browser as tables that can be downloaded in various formats. This application is particularly valuable in view of the exponentially increasing number of conopeptides that are being identified. ConoDictor was written in Perl using the common gateway interface module with a PHP submission page. Sequence matching is performed with hmmsearch from HMMER 3 and ps_scan.pl from the pftools 2.3 package. ConoDictor is freely accessible at http://conco.ebc.ee.
Designing a Pediatric Severe Sepsis Screening Tool
Sepanski, Robert J.; Godambe, Sandip A.; Mangum, Christopher D.; Bovat, Christine S.; Zaritsky, Arno L.; Shah, Samir H.
2014-01-01
We sought to create a screening tool with improved predictive value for pediatric severe sepsis (SS) and septic shock that can be incorporated into the electronic medical record and actively screen all patients arriving at a pediatric emergency department (ED). “Gold standard” SS cases were identified using a combination of coded discharge diagnosis and physician chart review from 7,402 children who visited a pediatric ED over 2 months. The tool’s identification of SS was initially based on International Consensus Conference on Pediatric Sepsis (ICCPS) parameters that were refined by an iterative, virtual process that allowed us to propose successive changes in sepsis detection parameters in order to optimize the tool’s predictive value based on receiver operating characteristics (ROC). Age-specific normal and abnormal values for heart rate (HR) and respiratory rate (RR) were empirically derived from 143,603 children seen in a second pediatric ED over 3 years. Univariate analyses were performed for each measure in the tool to assess its association with SS and to characterize it as an “early” or “late” indicator of SS. A split-sample was used to validate the final, optimized tool. The final tool incorporated age-specific thresholds for abnormal HR and RR and employed a linear temperature correction for each category. The final tool’s positive predictive value was 48.7%, a significant, nearly threefold improvement over the original ICCPS tool. False positive systemic inflammatory response syndrome identifications were nearly sixfold lower. PMID:24982852
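The following sketch illustrates the general shape of such a screening rule: age-banded heart rate and respiratory rate thresholds with a linear temperature correction. All thresholds and the correction slope are hypothetical placeholders, not the study's empirically derived values.

# Minimal sketch of the kind of rule the tool applies. All numbers here are
# hypothetical placeholders, not the study's empirically derived values.

HR_LIMIT = {(0, 1): 180, (1, 5): 160, (5, 12): 140, (12, 18): 120}
RR_LIMIT = {(0, 1): 50, (1, 5): 40, (5, 12): 30, (12, 18): 25}
HR_PER_DEG_C = 10.0   # hypothetical: extra beats/min allowed per degree above 37 C

def limit(table, age):
    return next(v for (lo, hi), v in table.items() if lo <= age < hi)

def flags_sepsis(age, hr, rr, temp_c):
    # linear temperature correction raises the HR threshold during fever
    hr_limit = limit(HR_LIMIT, age) + HR_PER_DEG_C * max(0.0, temp_c - 37.0)
    return hr > hr_limit or rr > limit(RR_LIMIT, age)

print(flags_sepsis(age=3, hr=175, rr=28, temp_c=38.5))  # False: fever-adjusted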
Tools for assessing fall risk in the elderly: a systematic review and meta-analysis.
Park, Seong-Hi
2018-01-01
The prevention of falls among the elderly is arguably one of the most important public health issues in today's aging society. The aim of this study was to assess which tools best predict the risk of falls in the elderly. Electronic searches were performed using Medline, EMBASE, the Cochrane Library, CINAHL, etc., using the following keywords: "fall risk assessment", "elderly fall screening", and "elderly mobility scale". The QUADAS-2 was applied to assess the internal validity of the diagnostic studies. Selected studies were meta-analyzed with MetaDisc 1.4. A total of 33 studies were eligible out of the 2,321 studies retrieved from selected databases. Twenty-six assessment tools for fall risk were used in the selected articles, and they tended to vary based on the setting. The fall risk assessment tools currently used for the elderly did not show sufficiently high predictive validity for differentiating high and low fall risks. The Berg Balance scale and Mobility Interaction Fall chart showed stable and high specificity, while the Downton Fall Risk Index, Hendrich II Fall Risk Model, St. Thomas's Risk Assessment Tool in Falling elderly inpatients, Timed Up and Go test, and Tinetti Balance scale showed the opposite results. We concluded that rather than a single measure, two assessment tools used together would better evaluate the characteristics of falls by the elderly that can occur due to a multitude of factors and maximize the advantages of each for predicting the occurrence of falls.
Integrated Computational Solution for Predicting Skin Sensitization Potential of Molecules
Desai, Aarti; Singh, Vivek K.; Jere, Abhay
2016-01-01
Introduction: Skin sensitization forms a major toxicological endpoint for dermatology and cosmetic products. The recent ban on animal testing for cosmetics demands alternative methods. We developed an integrated computational solution (SkinSense) that offers a robust solution and addresses the limitations of existing computational tools, i.e. a high false-positive rate and/or limited coverage. Results: The key components of our solution include: QSAR models selected from a combinatorial set, similarity information and literature-derived sub-structure patterns of known skin protein reactive groups. Its prediction performance on a challenge set of molecules showed accuracy = 75.32%, CCR = 74.36%, sensitivity = 70.00% and specificity = 78.72%, which is better than several existing tools including VEGA (accuracy = 45.00% and CCR = 54.17% with ‘High’ reliability scoring), DEREK (accuracy = 72.73% and CCR = 71.44%) and TOPKAT (accuracy = 60.00% and CCR = 61.67%). Although TIMES-SS showed higher predictive power (accuracy = 90.00% and CCR = 92.86%), the coverage was very low (only 10 out of 77 molecules were predicted reliably). Conclusions: Owing to improved prediction performance and coverage, our solution can serve as a useful expert system towards Integrated Approaches to Testing and Assessment for skin sensitization. It would be invaluable to the cosmetic/dermatology industry for pre-screening their molecules, and reducing time, cost and animal testing. PMID:27271321
NASA Technical Reports Server (NTRS)
DeHart, Russell
2017-01-01
This study determines the feasibility of creating a tool that can accurately predict Lunar Reconnaissance Orbiter (LRO) reaction wheel assembly (RWA) angular momentum, weeks or even months into the future. LRO is a three-axis stabilized spacecraft that was launched on June 18, 2009. While typically nadir-pointing, LRO conducts many types of slews to enable novel science collection. Momentum unloads have historically been performed approximately once every two weeks with the goal of maintaining system total angular momentum below 70 Nms; however, flight experience shows that the models developed before launch are overly conservative, with many momentum unloads being performed before system angular momentum surpasses 50 Nms. A more accurate model of RWA angular momentum growth would improve momentum unload scheduling and decrease the frequency of these unloads. Since some LRO instruments must be deactivated during momentum unloads (and, in the case of one instrument, decontaminated for 24 hours thereafter), a decrease in the frequency of unloads increases science collection. This study develops a new model to predict LRO RWA angular momentum. Regression analysis of data from October 2014 to October 2015 was used to develop relationships between solar beta angle, slew specifications, and RWA angular momentum growth. The resulting model predicts RWA angular momentum using input solar beta angle and mission schedule data. This model was used to predict RWA angular momentum from October 2013 to October 2014. Predictions agree well with telemetry; of the 23 momentum unloads performed from October 2013 to October 2014, the mean and median magnitudes of the RWA total angular momentum prediction error at the time of the momentum unloads were 3.7 and 2.7 Nms, respectively. The magnitude of the largest RWA total angular momentum prediction error was 10.6 Nms. Development of a tool that uses the models presented herein is currently underway.
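A minimal sketch of the regression-based forecasting idea described above, assuming a hypothetical feature set (absolute solar beta angle and daily slew count) and synthetic data; the abstract does not specify the actual model terms.

import numpy as np

# Fit per-day RWA momentum growth as a linear function of hypothetical
# regressors, then integrate the prediction over a schedule.
rng = np.random.default_rng(0)
n = 200
beta = rng.uniform(-90, 90, n)            # solar beta angle, deg
slews = rng.integers(0, 12, n)            # slews per day (hypothetical)
growth = 0.8 + 0.02 * np.abs(beta) + 0.15 * slews + rng.normal(0, 0.2, n)

X = np.column_stack([np.ones(n), np.abs(beta), slews])
coef, *_ = np.linalg.lstsq(X, growth, rcond=None)

def predict_growth(beta_angle, slews_per_day):
    return coef @ np.array([1.0, abs(beta_angle), slews_per_day])

# Forecast cumulative momentum (Nms) over a hypothetical mission schedule.
schedule = [(30.0, 4), (32.0, 6), (35.0, 2)]          # (beta, slews) per day
momentum = np.cumsum([predict_growth(b, s) for b, s in schedule])
print(momentum)  # schedule an unload as the forecast approaches 50-70 Nms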
NASA Astrophysics Data System (ADS)
Sahu, Neelesh Kumar; Andhare, Atul B.; Andhale, Sandip; Raju Abraham, Roja
2018-04-01
The present work deals with the prediction of surface roughness using cutting parameters along with in-process measured cutting force and tool vibration (acceleration) during turning of Ti-6Al-4V with cubic boron nitride (CBN) inserts. A full factorial design is used for the design of experiments, with cutting speed, feed rate and depth of cut as design variables. A prediction model for surface roughness is developed using response surface methodology (RSM) with cutting speed, feed rate, depth of cut, resultant cutting force and acceleration as control variables. Analysis of variance (ANOVA) is performed to find the significant terms in the model, and insignificant terms are removed after statistical testing using a backward elimination approach. The effect of each control variable on surface roughness is also studied. A predicted correlation coefficient (R²pred) of 99.4% shows that the model explains the experimental results well and remains robust when factors are adjusted, added or eliminated. The model is validated with five fresh experiments using measured force and acceleration values. The average absolute error between the RSM model and experimentally measured surface roughness is found to be 10.2%. Additionally, an artificial neural network (ANN) model is developed for prediction of surface roughness, and its results are compared with the modified regression model. Both the RSM model and the ANN (average absolute error 7.5%) predict roughness with more than 90% accuracy. The results show that including cutting force and vibration yields better surface roughness predictions than cutting parameters alone, and that the ANN outperforms the RSM model.
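A minimal sketch of an RSM fit with backward elimination, assuming synthetic data and hypothetical term names; the study's actual design matrix came from full factorial turning experiments with measured force and acceleration.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 27
df = pd.DataFrame({
    "v": rng.uniform(60, 180, n),     # cutting speed, m/min
    "f": rng.uniform(0.05, 0.2, n),   # feed, mm/rev
    "d": rng.uniform(0.2, 1.0, n),    # depth of cut, mm
    "F": rng.uniform(50, 400, n),     # resultant cutting force, N
    "a": rng.uniform(0.5, 5.0, n),    # tool acceleration, m/s^2
})
df["Ra"] = 0.2 + 8 * df.f + 0.002 * df.F + 0.05 * df.a + rng.normal(0, 0.05, n)
df["f2"] = df.f ** 2          # quadratic RSM term
df["vf"] = df.v * df.f        # interaction RSM term

# Backward elimination: refit and drop the least significant term
# until every remaining term has p < 0.05.
terms = ["v", "f", "d", "F", "a", "f2", "vf"]
while True:
    model = smf.ols("Ra ~ " + " + ".join(terms), df).fit()
    worst = model.pvalues.drop("Intercept").idxmax()
    if model.pvalues[worst] < 0.05 or len(terms) == 1:
        break
    terms.remove(worst)
print(model.params)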
Generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB.
Lee, Leng-Feng; Umberger, Brian R
2016-01-01
Computer modeling, simulation and optimization are powerful tools that have seen increased use in biomechanics research. Dynamic optimizations can be categorized as either data-tracking or predictive problems. The data-tracking approach has been used extensively to address human movement problems of clinical relevance. The predictive approach also holds great promise, but has seen limited use in clinical applications. Enhanced software tools would facilitate the application of predictive musculoskeletal simulations to clinically-relevant research. The open-source software OpenSim provides tools for generating tracking simulations but not predictive simulations. However, OpenSim includes an extensive application programming interface that permits extending its capabilities with scripting languages such as MATLAB. In the work presented here, we combine the computational tools provided by MATLAB with the musculoskeletal modeling capabilities of OpenSim to create a framework for generating predictive simulations of musculoskeletal movement based on direct collocation optimal control techniques. In many cases, the direct collocation approach can be used to solve optimal control problems considerably faster than traditional shooting methods. Cyclical and discrete movement problems were solved using a simple 1 degree of freedom musculoskeletal model and a model of the human lower limb, respectively. The problems could be solved in reasonable amounts of time (several seconds to 1-2 hours) using the open-source IPOPT solver. The problems could also be solved using the fmincon solver that is included with MATLAB, but the computation times were excessively long for all but the smallest of problems. The performance advantage for IPOPT was derived primarily by exploiting sparsity in the constraints Jacobian. The framework presented here provides a powerful and flexible approach for generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB. This should allow researchers to more readily use predictive simulation as a tool to address clinical conditions that limit human mobility.
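To make the direct collocation idea concrete, here is a toy transcription of a 1-degree-of-freedom minimum-effort problem into a nonlinear program using trapezoidal defect constraints. This is a sketch only: the paper's framework uses OpenSim musculoskeletal models with IPOPT (exploiting sparsity in the constraints Jacobian), while this example uses SciPy's SLSQP so it runs standalone.

import numpy as np
from scipy.optimize import minimize

# Toy problem: move a unit point mass from x=0 to x=1 in T=1 s, at rest at
# both ends, minimizing integrated control effort. States and controls at all
# mesh points are decision variables; dynamics become equality constraints.
N, T = 20, 1.0
h = T / N

def unpack(z):
    return z[:N + 1], z[N + 1:2 * (N + 1)], z[2 * (N + 1):]   # x, v, u

def effort(z):
    _, _, u = unpack(z)
    return h * np.sum(u ** 2)

def defects(z):
    x, v, u = unpack(z)
    dx = x[1:] - x[:-1] - 0.5 * h * (v[1:] + v[:-1])   # x' = v (trapezoidal)
    dv = v[1:] - v[:-1] - 0.5 * h * (u[1:] + u[:-1])   # v' = u (trapezoidal)
    bc = [x[0], v[0], x[-1] - 1.0, v[-1]]              # rest-to-rest boundaries
    return np.concatenate([dx, dv, bc])

z0 = np.zeros(3 * (N + 1))
sol = minimize(effort, z0, constraints={"type": "eq", "fun": defects},
               method="SLSQP")
x, v, u = unpack(sol.x)
print(sol.success, u[:5])

The analytic optimum for this rest-to-rest problem is u(t) = 6 - 12t, so the discretized solution can be checked directly against it.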
Taxi Time Prediction at Charlotte Airport Using Fast-Time Simulation and Machine Learning Techniques
NASA Technical Reports Server (NTRS)
Lee, Hanbong
2016-01-01
Accurate taxi time prediction is required for enabling efficient runway scheduling that can increase runway throughput and reduce taxi times and fuel consumption on the airport surface. Currently, NASA and American Airlines are jointly developing a decision-support tool called Spot and Runway Departure Advisor (SARDA) that assists airport ramp controllers in making gate pushback decisions and improving the overall efficiency of airport surface traffic. In this presentation, we propose to use Linear Optimized Sequencing (LINOS), a discrete-event fast-time simulation tool, to predict taxi times and provide the estimates to the runway scheduler in real-time airport operations. To assess its prediction accuracy, we also introduce a data-driven analytical method using machine learning techniques. These two taxi time prediction methods are evaluated against actual taxi time data obtained from the SARDA human-in-the-loop (HITL) simulation for Charlotte Douglas International Airport (CLT) using various performance measurement metrics. Based on the taxi time prediction results, we also discuss how prediction accuracy is affected by the operational complexity at this airport and how the fast-time simulation model can be improved before implementing it with an airport scheduling algorithm in a real-time environment.
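A hedged sketch of the data-driven taxi time prediction approach mentioned above, with hypothetical surface-traffic features and synthetic data standing in for the SARDA HITL dataset.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Learn taxi-out times from surface-traffic features (all hypothetical).
rng = np.random.default_rng(42)
n = 2000
X = np.column_stack([
    rng.integers(1, 30, n),      # aircraft ahead in the departure queue
    rng.uniform(0.5, 3.0, n),    # unimpeded route taxi time, min
    rng.integers(0, 2, n),       # runway crossing required (0/1)
])
y = 2.0 + 0.6 * X[:, 0] + 1.1 * X[:, 1] + 2.5 * X[:, 2] + rng.normal(0, 1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("MAE (min):", mean_absolute_error(y_te, model.predict(X_te)))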
Hegade, Ravindra Suryakant; De Beer, Maarten; Lynen, Frederic
2017-09-15
Chiral Stationary-Phase Optimized Selectivity Liquid Chromatography (SOSLC) is proposed as a tool to optimally separate mixtures of enantiomers on a set of commercially available coupled chiral columns. This approach allows for the prediction of the separation profiles on any possible combination of the chiral stationary phases based on a limited number of preliminary analyses, followed by automated selection of the optimal column combination. Both the isocratic and gradient SOSLC approaches were implemented to predict the retention times for a mixture of 4 chiral pairs on all possible combinations of the 5 commercial chiral columns. Predictions in isocratic and gradient mode were performed with a commercially available algorithm and with an in-house developed Microsoft Visual Basic algorithm, respectively. Optimal predictions in the isocratic mode required the coupling of 4 columns, whereby relative deviations between predicted and experimental retention times ranged between 2 and 7%. Gradient predictions led to the coupling of 3 chiral columns allowing baseline separation of all solutes, whereby differences between predictions and experiments ranged between 0 and 12%. The methodology is a novel tool for optimizing the separation of mixtures of optical isomers. Copyright © 2017 Elsevier B.V. All rights reserved.
Lohr, Kristine M; Clauser, Amanda; Hess, Brian J; Gelber, Allan C; Valeriano-Marcet, Joanne; Lipner, Rebecca S; Haist, Steven A; Hawley, Janine L; Zirkle, Sarah; Bolster, Marcy B
2015-11-01
The American College of Rheumatology (ACR) Adult Rheumatology In-Training Examination (ITE) is a feedback tool designed to identify strengths and weaknesses in the content knowledge of individual fellows-in-training and the training program curricula. We determined whether scores on the ACR ITE, as well as scores on other major standardized medical examinations and competency-based ratings, could be used to predict performance on the American Board of Internal Medicine (ABIM) Rheumatology Certification Examination. Between 2008 and 2012, 629 second-year fellows took the ACR ITE. Bivariate correlation analyses of assessment scores and multiple linear regression analyses were used to determine whether ABIM Rheumatology Certification Examination scores could be predicted on the basis of ACR ITE scores, United States Medical Licensing Examination scores, ABIM Internal Medicine Certification Examination scores, fellowship directors' ratings of overall clinical competency, and demographic variables. Logistic regression was used to evaluate whether these assessments were predictive of a passing outcome on the Rheumatology Certification Examination. In the initial linear model, the strongest predictors of the Rheumatology Certification Examination score were the second-year fellows' ACR ITE scores (β = 0.438) and ABIM Internal Medicine Certification Examination scores (β = 0.273). Using a stepwise model, the strongest predictors of higher scores on the Rheumatology Certification Examination were second-year fellows' ACR ITE scores (β = 0.449) and ABIM Internal Medicine Certification Examination scores (β = 0.276). Based on the findings of logistic regression analysis, ACR ITE performance was predictive of a pass/fail outcome on the Rheumatology Certification Examination (odds ratio 1.016 [95% confidence interval 1.011-1.021]). The predictive value of the ACR ITE score with regard to predicting performance on the Rheumatology Certification Examination supports use of the Adult Rheumatology ITE as a valid feedback tool during fellowship training. © 2015, American College of Rheumatology.
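For readers unfamiliar with how the reported odds ratio arises, the sketch below fits a logistic model of pass/fail on exam score and recovers the odds ratio per score point as exp(beta). The scores and the underlying pass model are synthetic, not the study's data.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
ite = rng.normal(500, 100, 629)                     # hypothetical ITE scores
p_pass = 1 / (1 + np.exp(-(0.016 * (ite - 450))))   # hypothetical true model
passed = (rng.random(629) < p_pass).astype(float)

X = sm.add_constant(ite)
fit = sm.Logit(passed, X).fit(disp=0)
beta = fit.params[1]
# The study reports OR 1.016 per ITE point; exp(beta) plays the same role here.
print("odds ratio per ITE point:", np.exp(beta))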
Benchmarking CRISPR on-target sgRNA design.
Yan, Jifang; Chuai, Guohui; Zhou, Chi; Zhu, Chenyu; Yang, Jing; Zhang, Chao; Gu, Feng; Xu, Han; Wei, Jia; Liu, Qi
2017-02-15
CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats)-based gene editing has been widely implemented in various cell types and organisms. A major challenge in the effective application of the CRISPR system is the need to design highly efficient single-guide RNA (sgRNA) with minimal off-target cleavage. Several tools are available for sgRNA design, but few have been compared systematically. In our opinion, benchmarking the performance of the available tools and indicating their applicable scenarios are important issues. Moreover, whether the reported sgRNA design rules are reproducible across different sgRNA libraries, cell types and organisms remains unclear. In our study, a systematic and unbiased benchmark of sgRNA prediction efficacy was performed on nine representative on-target design tools, based on six benchmark data sets covering five different cell types. The benchmark study presented here provides novel quantitative insights into the available CRISPR tools. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Modelling of tunnelling processes and rock cutting tool wear with the particle finite element method
NASA Astrophysics Data System (ADS)
Carbonell, Josep Maria; Oñate, Eugenio; Suárez, Benjamín
2013-09-01
Underground construction involves all sorts of challenges in the analysis, design, project and execution phases. The dimensions of tunnels and their structural requirements are growing, and so do safety and security demands. New engineering tools are needed to perform safer planning and design. This work presents the advances in the particle finite element method (PFEM) for the modelling and analysis of tunnelling processes, including the wear of the cutting tools. The PFEM has its foundation in the Lagrangian description of the motion of a continuum built from a set of particles with known physical properties. The method uses a remeshing process combined with the alpha-shape technique to detect the contacting surfaces, and a finite element method for the mechanical computations. A contact procedure has been developed for the PFEM which is combined with a constitutive model for predicting the excavation front and the wear of cutting tools. The material parameters govern the coupling of frictional contact and wear between the interacting domains at the excavation front. The PFEM allows predicting several parameters which are relevant for estimating the performance of a tunnel boring machine, such as wear in the cutting tools, the pressure distribution on the face of the boring machine and the vibrations produced in the machinery and the adjacent soil/rock. The final aim is to help in the design of the excavating tools and in the planning of the tunnelling operations. The applications presented show that the PFEM is a promising technique for the analysis of tunnelling problems.
A critical assessment of topologically associating domain prediction tools
Dali, Rola
2017-01-01
Topologically associating domains (TADs) have been proposed to be the basic unit of chromosome folding and have been shown to play key roles in genome organization and gene regulation. Several different tools are available for TAD prediction, but their properties have never been thoroughly assessed. In this manuscript, we compare the output of seven different TAD prediction tools on two published Hi-C data sets. TAD predictions varied greatly between tools in number, size distribution and other biological properties. Assessed against a manual annotation of TADs, individual TAD boundary predictions were found to be quite reliable, but their assembly into complete TAD structures was much less so. In addition, many tools were sensitive to sequencing depth and resolution of the interaction frequency matrix. This manuscript provides users and designers of TAD prediction tools with information that will help guide the choice of tools and the interpretation of their predictions. PMID:28334773
ERIC Educational Resources Information Center
McLoughlin, M. Padraig M. M.; Bluford, Dontrell A.
2004-01-01
This study investigated the predictive validity of the Descriptive Tests of Mathematical Skills (DTMS) and the SAT-Mathematics (SAT-M) tests as placement tools for entering students in a small, liberal arts, historically black institution (HBI) using regression analysis. The placement schema is four-tiered: for a remedial algebra course, college…
A simple method to predict regional fish abundance: an example in the McKenzie River Basin, Oregon
D.J. McGarvey; J.M. Johnston
2011-01-01
Regional assessments of fisheries resources are increasingly called for, but tools with which to perform them are limited. We present a simple method that can be used to estimate regional carrying capacity and apply it to the McKenzie River Basin, Oregon. First, we use a macroecological model to predict trout densities within small, medium, and large streams in the...
Initiating an Online Reputation Monitoring System with Open Source Analytics Tools
NASA Astrophysics Data System (ADS)
Shuhud, Mohd Ilias M.; Alwi, Najwa Hayaati Md; Halim, Azni Haslizan Abd
2018-05-01
Online reputation is an invaluable asset for modern organizations, as it can help business performance, especially sales and profit. However, if we are not aware of our reputation, it is difficult to maintain it. Social media analytics is a new tool that can provide online reputation monitoring in various ways, such as sentiment analysis. As a result, numerous large-scale organizations have implemented Online Reputation Monitoring (ORM) systems. However, this solution should not be exclusive to high-income organizations, as many organizations, regardless of size and type, are now online. This research attempts to propose an affordable and reliable ORM system using a combination of open source analytics tools for both novice practitioners and academicians. We also evaluated its prediction accuracy and found that the system provides acceptable predictions (sixty percent accuracy), with its majority-polarity predictions tallying with human annotation. The proposed system can help support business decisions with flexible monitoring strategies, especially for organizations that want to initiate and administer ORM themselves at low cost.
Augmenting the SCaN Link Budget Tool with Validated Atmospheric Propagation
NASA Technical Reports Server (NTRS)
Steinkerchner, Leo; Welch, Bryan
2017-01-01
In any Earth-Space or Space-Earth communications link, atmospheric effects cause significant signal attenuation. In order to develop a communications system that is cost effective while meeting appropriate performance requirements, it is important to accurately predict these effects for the given link parameters. This project aimed to develop a Matlab(TradeMark) (The MathWorks, Inc.) program that could augment the existing Space Communications and Navigation (SCaN) Link Budget Tool with accurate predictions of atmospheric attenuation of both optical and radio-frequency signals according to the SCaN Optical Link Assessment Model Version 5 and the International Telecommunications Union, Radiocommunications Sector (ITU-R) atmospheric propagation loss model, respectively. When compared to data collected from the Advanced Communications Technology Satellite (ACTS), the radio-frequency model predicted attenuation to within 1.3 dB of loss for 95% of measurements. Ultimately, this tool will be integrated into the SCaN Center for Engineering, Networks, Integration, and Communications (SCENIC) user interface in order to support analysis of existing SCaN systems and planning capabilities for future NASA missions.
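A minimal link budget sketch showing where a predicted atmospheric attenuation term enters: received power in decibel terms is EIRP plus receive gain minus free-space path loss minus atmospheric loss. The numbers are illustrative, not SCaN mission parameters.

import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss: 20 log10(4*pi*d*f/c)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / 3.0e8)

def received_power_dbw(eirp_dbw, rx_gain_dbi, distance_m, freq_hz, atmos_db):
    # atmos_db is the predicted atmospheric attenuation the tool supplies
    return eirp_dbw + rx_gain_dbi - fspl_db(distance_m, freq_hz) - atmos_db

# 20 GHz downlink from GEO range with a hypothetical 3 dB of predicted fade.
print(received_power_dbw(eirp_dbw=50.0, rx_gain_dbi=40.0,
                         distance_m=3.6e7, freq_hz=20e9, atmos_db=3.0))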
New support vector machine-based method for microRNA target prediction.
Li, L; Gao, Q; Mao, X; Cao, Y
2014-06-09
MicroRNA (miRNA) plays important roles in cell differentiation, proliferation, growth, mobility, and apoptosis. An accurate list of precise target genes is necessary in order to fully understand the importance of miRNAs in animal development and disease. Several computational methods have been proposed for miRNA target-gene identification. However, these methods still have limitations with respect to their sensitivity and accuracy. Thus, we developed a new miRNA target-prediction method based on the support vector machine (SVM) model. The model supplies information on two binding sites (primary and secondary) to a radial basis function kernel, which serves as the similarity measure over the SVM features. The features are categorized as structural, thermodynamic, and sequence-conservation properties. Using high-confidence datasets selected from public miRNA target databases, we obtained a human miRNA target SVM classifier model with high performance and provided an efficient tool for human miRNA target gene identification. Experiments have shown that our method is a reliable tool for miRNA target-gene prediction, and a successful application of an SVM classifier. Compared with other methods, the method proposed here improves the sensitivity and accuracy of miRNA prediction. Its performance can be further improved by providing more training examples.
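A minimal sketch of the classifier setup described above (RBF-kernel SVM over structural, thermodynamic, and conservation features); the three features and the labeling rule are synthetic stand-ins for the study's curated datasets.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 400
X = np.column_stack([
    rng.normal(-15, 4, n),    # duplex free energy, kcal/mol (thermodynamic)
    rng.uniform(0, 1, n),     # seed-match quality score (structural)
    rng.uniform(0, 1, n),     # cross-species conservation score
])
y = (X[:, 0] < -15) & (X[:, 1] > 0.5)   # synthetic "true target" rule

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
print(cross_val_score(clf, X, y, cv=5).mean())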
Mapping The Temporal and Spatial Variability of Soil Moisture Content Using Proximal Soil Sensing
NASA Astrophysics Data System (ADS)
Virgawati, S.; Mawardi, M.; Sutiarso, L.; Shibusawa, S.; Segah, H.; Kodaira, M.
2018-05-01
In studies related to soil optical properties, it has been proven that visible and NIR soil spectral response can predict soil moisture content (SMC) using proper data analysis techniques. SMC is one of the most important soil properties, influencing most physical, chemical, and biological soil processes. The problem is how to provide reliable, fast and inexpensive information on subsurface SMC from numerous soil samples and repeated measurements. The use of spectroscopy technology has emerged as a rapid and low-cost tool for extensive investigation of soil properties. The objective of this research was to develop calibration models based on laboratory Vis-NIR spectroscopy to estimate the SMC at four different growth stages of the soybean crop in Yogyakarta Province. An ASD field spectrophotoradiometer was used to measure the reflectance of soil samples. Partial least squares regression (PLSR) was performed to establish the relationship between the SMC and the Vis-NIR soil reflectance spectra. The selected calibration model was used to predict the SMC of new samples. The temporal and spatial variability of SMC was presented as digital maps. The results revealed that the calibration model was excellent for SMC prediction. Vis-NIR spectroscopy was a reliable tool for the prediction of SMC.
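A minimal sketch of a PLSR calibration of the kind described above, with simulated reflectance spectra standing in for ASD measurements and cross-validation used to assess the calibration.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(5)
n_samples, n_bands = 120, 500
smc = rng.uniform(0.05, 0.45, n_samples)                        # moisture content
band_effect = np.exp(-((np.arange(n_bands) - 300) / 60) ** 2)   # water absorption band
spectra = (0.5 - 0.6 * np.outer(smc, band_effect)
           + rng.normal(0, 0.01, (n_samples, n_bands)))         # simulated reflectance

pls = PLSRegression(n_components=8)
pred = cross_val_predict(pls, spectra, smc, cv=10).ravel()
r2 = 1 - np.sum((smc - pred) ** 2) / np.sum((smc - smc.mean()) ** 2)
print("cross-validated R2:", round(r2, 3))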
Prediction of Thermal Fatigue in Tooling for Die-casting Copper via Finite Element Analysis
NASA Astrophysics Data System (ADS)
Sakhuja, Amit; Brevick, Jerald R.
2004-06-01
Recent research by the Copper Development Association (CDA) has demonstrated the feasibility of die-casting electric motor rotors using copper. Electric motors using copper rotors are significantly more energy efficient relative to motors using aluminum rotors. However, one of the challenges in copper rotor die-casting is low tool life. Experiments have shown that the higher molten metal temperature of copper (1085 °C), as compared to aluminum (660 °C) accelerates the onset of thermal fatigue or heat checking in traditional H-13 tool steel. This happens primarily because the mechanical properties of H-13 tool steel decrease significantly above 650 °C. Potential approaches to mitigate the heat checking problem include: 1) identification of potential tool materials having better high temperature mechanical properties than H-13, and 2) reduction of the magnitude of cyclic thermal excursions experienced by the tooling by increasing the bulk die temperature. A preliminary assessment of alternative tool materials has led to the selection of nickel-based alloys Haynes 230 and Inconel 617 as potential candidates. These alloys were selected based on their elevated temperature physical and mechanical properties. Therefore, the overall objective of this research work was to predict the number of copper rotor die-casting cycles to the onset of heat checking (tool life) as a function of bulk die temperature (up to 650 °C) for Haynes 230 and Inconel 617 alloys. To achieve these goals, a 2D thermo-mechanical FEA was performed to evaluate strain ranges on selected die surfaces. The method of Universal Slopes (Strain Life Method) was then employed for thermal fatigue life predictions.
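For reference, the Method of Universal Slopes invoked above is commonly written (in Manson's form; the abstract does not restate it) as

\[ \Delta\varepsilon = 3.5\,\frac{\sigma_u}{E}\,N_f^{-0.12} + \varepsilon_f^{0.6}\,N_f^{-0.6} \]

where Δε is the total strain range, σ_u the ultimate tensile strength, E the elastic modulus, ε_f the true fracture ductility, and N_f the number of cycles to failure; solving it for N_f at the FEA-derived strain range yields the thermal fatigue life estimate.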
In response to 'Can sugars be produced from fatty acids? A test case for pathway analysis tools'.
Faust, Karoline; Croes, Didier; van Helden, Jacques
2009-12-01
In their article entitled 'Can sugars be produced from fatty acids? A test case for pathway analysis tools' de Figueiredo and co-authors assess the performance of three pathway prediction tools (METATOOL, PathFinding and Pathway Hunter Tool) using the synthesis of glucose-6-phosphate (G6P) from acetyl-CoA in humans as a test case. We think that this article is biased for three reasons: (i) the metabolic networks used as input for the respective tools were of very different sizes; (ii) the 'assessment' is restricted to two study cases; (iii) developers are inherently more skilled to use their own tools than those developed by other people. We extended the analyses led by de Figueiredo and clearly show that the apparent superior performance of their tool (METATOOL) is partly due to the differences in input network sizes. We also see a conceptual problem in the comparison of tools that serve different purposes. In our opinion, metabolic path finding and elementary mode analysis are answering different biological questions, and should be considered as complementary rather than competitive approaches. Supplementary data are available at Bioinformatics online.
Analytical Modeling and Performance Prediction of Remanufactured Gearbox Components
NASA Astrophysics Data System (ADS)
Pulikollu, Raja V.; Bolander, Nathan; Vijayakar, Sandeep; Spies, Matthew D.
Gearbox components operate in extreme environments, often leading to premature removal or overhaul. Though worn or damaged, these components can still function, provided the appropriate remanufacturing processes are deployed. Doing so saves a significant amount of resources (time, materials, energy, manpower) otherwise required to produce a replacement part. Unfortunately, current design and analysis approaches require extensive testing and evaluation to validate the effectiveness and safety of a component that has been used in the field and then processed outside of original OEM specification. Testing every possible combination of component, level of potential damage, and repair processing option would be an expensive and time-consuming feat, thus prohibiting a broad deployment of remanufacturing processes across industry. However, such evaluation and validation can occur through Integrated Computational Materials Engineering (ICME) modeling and simulation. Sentient developed a microstructure-based component life prediction (CLP) tool to quantify and assist gearbox component remanufacturing processes. This was achieved by modeling the design-manufacturing-microstructure-property relationship. The CLP tool assists in the remanufacturing of high-value, high-demand rotorcraft, automotive and wind turbine gears and bearings. This paper summarizes the CLP model development and validation efforts by comparing the simulation results with rotorcraft spiral bevel gear physical test data. CLP analyzes gear components and systems for safety, longevity, reliability and cost by predicting (1) new gearbox component performance and optimal time to remanufacture, (2) qualification of used gearbox components for the remanufacturing process, and (3) remanufactured component performance.
2015-10-14
such as timely short naps and caffeine, are often used to mitigate the effects of sleep loss on performance. However, the timing, duration, and dosage...loss and the restorative effects of different dosages of caffeine on a specific individual’s performance. When used as a decision-aid tool, this model...provides the means to maximize Warfighter cognitive performance, resulting in peak alertness and prolonged alertness at the desired times
New Integrated Modeling Capabilities: MIDAS' Recent Behavioral Enhancements
NASA Technical Reports Server (NTRS)
Gore, Brian F.; Jarvis, Peter A.
2005-01-01
The Man-machine Integration Design and Analysis System (MIDAS) is an integrated human performance modeling software tool that is based on mechanisms that underlie and cause human behavior. A PC-Windows version of MIDAS has been created that integrates the anthropometric character "Jack (TM)" with MIDAS' validated perceptual and attention mechanisms. MIDAS now models multiple simulated humans engaging in goal-related behaviors. New capabilities include the ability to predict situations in which errors and/or performance decrements are likely due to a variety of factors including concurrent workload and performance influencing factors (PIFs). This paper describes a new model that predicts the effects of microgravity on a mission specialist's performance, and its first application to simulating the task of conducting a Life Sciences experiment in space according to a sequential or parallel schedule of performance.
A probabilistic methodology for radar cross section prediction in conceptual aircraft design
NASA Astrophysics Data System (ADS)
Hines, Nathan Robert
System effectiveness has increasingly become the prime metric for the evaluation of military aircraft. As such, it is the decision maker's/designer's goal to maximize system effectiveness. Industry and government research documents indicate that all future military aircraft will incorporate signature reduction as an attempt to improve system effectiveness and reduce the cost of attrition. Today's operating environments demand low observable aircraft which are able to reliably take out valuable, time-critical targets. Thus, it is desirable to be able to design vehicles that are balanced for increased effectiveness. Previous studies have shown that shaping of the vehicle is one of the most important contributors to radar cross section, a measure of radar signature, and must be considered from the very beginning of the design process. Radar cross section estimation should be incorporated into conceptual design to develop more capable systems. This research strives to meet these needs by developing a conceptual design tool that predicts radar cross section for parametric geometries. This tool predicts the absolute radar cross section of the vehicle as well as the impact of geometry changes, allowing for the simultaneous tradeoff of the aerodynamic, performance, and cost characteristics of the vehicle with the radar cross section. Furthermore, this tool can be linked to a campaign theater analysis code to demonstrate the changes in system and system of system effectiveness due to changes in aircraft geometry. A general methodology was developed and implemented, and sample computer codes applied to prototype the proposed process. Studies utilizing this radar cross section tool were subsequently performed to demonstrate the capabilities of this method and show the impact that various inputs have on the outputs of these models. The F/A-18 aircraft configuration was chosen as a case study vehicle to perform a design space exercise and to investigate the relative impact of shaping parameters on radar cross section. Finally, two unique low observable configurations were analyzed to examine the impact of shaping for stealthiness.
Donini, Lorenzo M; Poggiogalle, Eleonora; Molfino, Alessio; Rosano, Aldo; Lenzi, Andrea; Rossi Fanelli, Filippo; Muscaritoli, Maurizio
2016-10-01
Malnutrition plays a major role in clinical and functional impairment in older adults. The use of validated, user-friendly and rapid screening tools for malnutrition in the elderly may improve the diagnosis and, possibly, the prognosis. The aim of this study was to assess the agreement between Mini-Nutritional Assessment (MNA), considered as a reference tool, MNA short form (MNA-SF), Malnutrition Universal Screening Tool (MUST), and Nutrition Risk Screening (NRS-2002) in elderly institutionalized participants. Participants were enrolled among nursing home residents and underwent a multidimensional evaluation. Predictive value and survival analysis were performed to compare the nutritional classifications obtained from the different tools. A total of 246 participants (164 women, age: 82.3 ± 9 years, and 82 men, age: 76.5 ± 11 years) were enrolled. Based on MNA, 22.6% of females and 17% of males were classified as malnourished; 56.7% of women and 61% of men were at risk of malnutrition. Agreement between MNA and MUST or NRS-2002 was classified as "fair" (k = 0.270 and 0.291, respectively; P < .001), whereas the agreement between MNA and MNA-SF was classified as "moderate" (k = 0.588; P < .001). Because of the high percentage of false negative participants, MUST and NRS-2002 presented a low overall predictive value compared with MNA and MNA-SF. Clinical parameters were significantly different in false negative participants with MUST or NRS-2002 from true negative and true positive individuals using the reference tool. For all screening tools, there was a significant association between malnutrition and mortality. MNA showed the best predictive value for survival among well-nourished participants. Functional, psychological, and cognitive parameters, not considered in MUST and NRS-2002 tools, are probably more important risk factors for malnutrition than acute illness in geriatric long-term care inpatient settings and may account for the low predictive value of these tests. MNA-SF seems to combine the predictive capacity of the full version of the MNA with a sufficiently short time of administration. Copyright © 2016 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.
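As a pointer to how the agreement statistics above are computed, the sketch below evaluates Cohen's kappa on a hypothetical pair of three-category tool classifications; the patient labels are invented for illustration.

from sklearn.metrics import cohen_kappa_score

# Hypothetical classifications for ten patients
# (0 = well nourished, 1 = at risk, 2 = malnourished).
mna =  [2, 1, 1, 0, 2, 1, 0, 1, 2, 0]   # reference tool (hypothetical)
must = [2, 0, 1, 0, 1, 0, 0, 1, 2, 0]   # screening tool (hypothetical)

# Landis & Koch interpretation: 0.21-0.40 "fair", 0.41-0.60 "moderate".
print(cohen_kappa_score(mna, must))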
Determining GPS average performance metrics
NASA Technical Reports Server (NTRS)
Moore, G. V.
1995-01-01
Analytic and semi-analytic methods are used to show that users of the GPS constellation can expect performance variations based on their location. Specifically, performance is shown to be a function of both altitude and latitude. These results stem from the fact that the GPS constellation is itself non-uniform. For example, GPS satellites are over four times as likely to be directly over Tierra del Fuego as over Hawaii or Singapore. Inevitable performance variations due to user location occur for ground, sea, air and space GPS users. These performance variations can be studied in an average relative sense. A semi-analytic tool which symmetrically allocates GPS satellite latitude belt dwell times among longitude points is used to compute average performance metrics. These metrics include the average number of GPS vehicles visible; relative average accuracies in the radial, intrack and crosstrack (or radial, north/south, east/west) directions; and relative average PDOP or GDOP. The tool can be quickly changed to incorporate various user antenna obscuration models and various GPS constellation designs. Among other applications, tool results can be used in studies to predict locations and geometries of best/worst case performance, design GPS constellations, determine optimal user antenna location and understand performance trends among various users.
Phagonaute: A web-based interface for phage synteny browsing and protein function prediction.
Delattre, Hadrien; Souiai, Oussema; Fagoonee, Khema; Guerois, Raphaël; Petit, Marie-Agnès
2016-09-01
Distant homology search tools are of great help in predicting viral protein functions. However, due to the lack of profile databases dedicated to viruses, they can lack sensitivity. We constructed HMM profiles for more than 80,000 proteins from both phages and archaeal viruses, and performed all pairwise comparisons with the HHsearch program. The whole resulting database can be explored through a user-friendly "Phagonaute" interface to help predict functions. Results are displayed together with their genetic context, to strengthen inferences based on remote homology. Beyond function prediction, this tool permits detection of co-occurrences, often indicative of proteins completing a task together, and observation of conserved patterns across large evolutionary distances. As a test, Herpes simplex virus 1 was added to Phagonaute, and 25% of its proteome matched bacterial or archaeal viral protein counterparts. Phagonaute should therefore help virologists in their quest for protein functions and evolutionary relationships. Copyright © 2016 Elsevier Inc. All rights reserved.
Using single leg standing time to predict the fall risk in elderly.
Chang, Chun-Ju; Chang, Yu-Shin; Yang, Sai-Wei
2013-01-01
In clinical evaluation, fall risk in the elderly is typically assessed from falling history or a balance assessment tool; because of tool limitations, the risk sometimes cannot be predicted accurately. In this study, we first analyzed the balance performance of 15 healthy elderly (without falling experience) and 15 falling elderly (one to three falls) in previous research. After a 1-year follow-up, only one elderly participant fell during this period, suggesting that falling experience has a ceiling effect on fall prediction. However, we also found that single leg standing time could help predict fall risk more accurately, especially for falling elderly who could not stand on a single leg for more than 10 seconds, with a significant correlation between falling experience and single leg standing time (r = -0.474, p = 0.026). The results also showed significant body sway just before falling, and the COP may be an important characteristic of the falling elderly group.
Improved Helicopter Rotor Performance Prediction through Loose and Tight CFD/CSD Coupling
NASA Astrophysics Data System (ADS)
Ickes, Jacob C.
Helicopters and other Vertical Take-Off or Landing (VTOL) vehicles exhibit an interesting combination of structural dynamic and aerodynamic phenomena which together drive the rotor performance. The combination of factors involved makes simulating the rotor a challenging and multidisciplinary effort, and one which is still an active area of interest in the industry because of the money and time it could save during design. Modern tools allow the prediction of rotorcraft physics from first principles. Analysis of the rotor system with this level of accuracy provides the understanding necessary to improve its performance. There has historically been a divide between the comprehensive codes which perform aeroelastic rotor simulations using simplified aerodynamic models, and the very computationally intensive Navier-Stokes Computational Fluid Dynamics (CFD) solvers. As computer resources become more available, efforts have been made to replace the simplified aerodynamics of the comprehensive codes with the more accurate results from a CFD code. The objective of this work is to perform aeroelastic rotorcraft analysis using first-principles simulations for both fluid and structural predictions using tools available at the University of Toledo. Two separate codes are coupled together in both loose coupling (data exchange on a periodic interval) and tight coupling (data exchange each time step) schemes. To allow the coupling to be carried out in a reliable and efficient way, a Fluid-Structure Interaction code was developed which automatically performs the primary functions of the loose and tight coupling procedures. Flow phenomena such as transonics, dynamic stall, locally reversed flow on a blade, and Blade-Vortex Interaction (BVI) were simulated in this work. Results of the analysis show aerodynamic load improvement due to the inclusion of the CFD-based airloads in the structural dynamics analysis of the Computational Structural Dynamics (CSD) code. Improvements included better peak/trough magnitude prediction, better phase prediction of these locations, and a predicted signal whose frequency content more closely matches the flight test data than that of the CSD code acting alone. Additionally, a tight coupling analysis was performed as a demonstration of the capability and unique aspects of such an analysis. This work shows that away from the center of the flight envelope, the aerodynamic modeling of the CSD code can be replaced with a more accurate set of predictions from a CFD code with an improvement in the aerodynamic results. The better predictions come at substantially increased computational costs of between 1,000 and 10,000 processor-hours.
Using the arthroscopic surgery skill evaluation tool as a pass-fail examination.
Koehler, Ryan J; Nicandri, Gregg T
2013-12-04
Examination of arthroscopic skill requires evaluation tools that are valid and reliable, with clear criteria for passing. The Arthroscopic Surgery Skill Evaluation Tool was developed as a video-based assessment of technical skill with criteria for passing established by a panel of experts. The purpose of this study was to test the validity and reliability of the Arthroscopic Surgery Skill Evaluation Tool as a pass-fail examination of arthroscopic skill. Twenty-eight residents and two sports medicine faculty members were recorded performing diagnostic knee arthroscopy on a left and right cadaveric specimen in our arthroscopic skills laboratory. Procedure videos were evaluated with use of the Arthroscopic Surgery Skill Evaluation Tool by two raters blind to subject identity. Subjects were considered to pass the Arthroscopic Surgery Skill Evaluation Tool when they attained scores of ≥ 3 on all eight assessment domains. The raters agreed on a pass-fail rating for fifty-five of sixty videos, with an intraclass correlation coefficient of 0.83. Ten of thirty participants were assigned passing scores by both raters for both diagnostic arthroscopies performed in the laboratory. Receiver operating characteristic analysis demonstrated that logging more than eighty arthroscopic cases or performing more than thirty-five arthroscopic knee cases was predictive of attaining a passing Arthroscopic Surgery Skill Evaluation Tool score on both procedures performed in the laboratory. This study demonstrates that the Arthroscopic Surgery Skill Evaluation Tool is valid and reliable as a pass-fail examination of diagnostic arthroscopy of the knee in the simulation laboratory. Further study is necessary to determine whether the tool can be used for the assessment of multiple arthroscopic procedures and whether it can be used to evaluate arthroscopic procedures performed in the operating room.
Cagiltay, Nergiz Ercil; Ozcelik, Erol; Sengul, Gokhan; Berker, Mustafa
2017-11-01
In neurosurgery education, there is a paradigm shift from time-based training to a criterion-based model, for which competency assessment becomes critical. Even though virtual reality simulators provide alternatives to improve education and assessment in neurosurgery programs and allow several objective assessment measures, there are few tools for assessing the overall performance of trainees. This study aims to develop and validate a tool for assessing the overall performance of participants in a simulation-based endoneurosurgery training environment. A training program was developed at two levels: endoscopy practice and beginning surgical practice, based on four scenarios. Three experiments were then conducted with three corresponding groups of participants (Experiment 1, 45 participants (32 beginners, 13 experienced); Experiment 2, 53 (40 beginners, 13 experienced); and Experiment 3, 26 (14 novices, 12 intermediate)). The results were analyzed to identify the common factors among the performance measurements of these experiments, and a factor capable of assessing the overall skill levels of surgical residents was extracted. Afterwards, the proposed measure was tested for its ability to estimate the experience levels of the participants, and the level of realism of these educational scenarios was assessed. The factor formed by time, distance, and accuracy on simulated tasks provided an overall performance indicator. Prediction correctness was much higher for beginners than for experienced surgeons in Experiments 1 and 2. When the non-dominant hand is used in a surgical procedure-based scenario, the skill levels of surgeons can be better predicted. The results indicate that the scenarios in Experiments 1 and 2 can be used as an assessment tool for beginners, and scenario 2 in Experiment 3 can be used as an assessment tool for intermediate and novice levels. It can be concluded that striking a balance between perceived action capacities and skills is critical for designing and developing better surgical skill assessment simulation tools.
NASA Astrophysics Data System (ADS)
Mey, Antonia S. J. S.; Jiménez, Jordi Juárez; Michel, Julien
2018-01-01
The Drug Design Data Resource (D3R) consortium organises blinded challenges to address the latest advances in computational methods for ligand pose prediction, affinity ranking, and free energy calculations. Within the context of the second D3R Grand Challenge, several blinded binding free energy predictions were made for two congeneric series of Farnesoid X Receptor (FXR) inhibitors with a semi-automated alchemical free energy calculation workflow featuring the FESetup and SOMD software tools. Reasonable performance was observed in retrospective analyses of literature datasets. Nevertheless, blinded predictions on the full D3R datasets were poor due to difficulties encountered with the ranking of compounds that vary in their net charge. Performance increased for predictions that were restricted to subsets of compounds carrying the same net charge. Disclosure of X-ray crystallography derived binding modes maintained or improved the correlation with experiment in subsequent rounds of predictions. The best performing protocols on D3R set1 and set2 were comparable or superior to predictions made on the basis of analysis of literature structure-activity relationships (SARs) only, and comparable to, or slightly inferior to, the best submissions from other groups.
NASA Technical Reports Server (NTRS)
Wickens, Christopher; Sebok, Angelia; Keller, John; Peters, Steve; Small, Ronald; Hutchins, Shaun; Algarin, Liana; Gore, Brian Francis; Hooey, Becky Lee; Foyle, David C.
2013-01-01
NextGen operations are associated with a variety of changes to the national airspace system (NAS) including changes to the allocation of roles and responsibilities among operators and automation, the use of new technologies and automation, additional information presented on the flight deck, and the entire concept of operations (ConOps). In the transition to NextGen airspace, aviation and air operations designers need to consider the implications of design or system changes on human performance and the potential for error. To ensure continued safety of the NAS, it will be necessary for researchers to evaluate design concepts and potential NextGen scenarios well before implementation. One approach for such evaluations is through human performance modeling. Human performance models (HPMs) provide effective tools for predicting and evaluating operator performance in systems. HPMs offer significant advantages over empirical, human-in-the-loop testing in that (1) they allow detailed analyses of systems that have not yet been built, (2) they offer great flexibility for extensive data collection, (3) they do not require experimental participants, and thus can offer cost and time savings. HPMs differ in their ability to predict performance and safety with NextGen procedures, equipment and ConOps. Models also vary in terms of how they approach human performance (e.g., some focus on cognitive processing, others focus on discrete tasks performed by a human, while others consider perceptual processes), and in terms of their associated validation efforts. The objectives of this research effort were to support the Federal Aviation Administration (FAA) in identifying HPMs that are appropriate for predicting pilot performance in NextGen operations, to provide guidance on how to evaluate the quality of different models, and to identify gaps in pilot performance modeling research, that could guide future research opportunities. This research effort is intended to help the FAA evaluate pilot modeling efforts and select the appropriate tools for future modeling efforts to predict pilot performance in NextGen operations.
Synthetic biology: tools to design microbes for the production of chemicals and fuels.
Seo, Sang Woo; Yang, Jina; Min, Byung Eun; Jang, Sungho; Lim, Jae Hyung; Lim, Hyun Gyu; Kim, Seong Cheol; Kim, Se Yeon; Jeong, Jun Hong; Jung, Gyoo Yeol
2013-11-01
The engineering of biological systems to achieve specific purposes requires design tools that function in a predictable and quantitative manner. Recent advances in the field of synthetic biology, particularly in the programmable control of gene expression at multiple levels of regulation, have increased our ability to efficiently design and optimize biological systems to perform designed tasks. Furthermore, implementation of these designs in biological systems highlights the potential of using these tools to build microbial cell factories for the production of chemicals and fuels. In this paper, we review current developments in the design of tools for controlling gene expression at transcriptional, post-transcriptional and post-translational levels, and consider potential applications of these tools. Copyright © 2013 Elsevier Inc. All rights reserved.
Automated Concurrent Blackboard System Generation in C++
NASA Technical Reports Server (NTRS)
Kaplan, J. A.; McManus, J. W.; Bynum, W. L.
1999-01-01
In his 1992 Ph.D. thesis, "Design and Analysis Techniques for Concurrent Blackboard Systems", John McManus defined several performance metrics for concurrent blackboard systems and developed a suite of tools for creating and analyzing such systems. These tools allow a user to analyze a concurrent blackboard system design and predict the performance of the system before any code is written. The design can be modified until simulated performance is satisfactory. Then, the code generator can be invoked to generate automatically all of the code required for the concurrent blackboard system except for the code implementing the functionality of each knowledge source. We have completed the port of the source code generator and a simulator for a concurrent blackboard system. The source code generator generates the necessary C++ source code to implement the concurrent blackboard system using Parallel Virtual Machine (PVM) running on a heterogeneous network of UNIX™ workstations. The concurrent blackboard simulator uses the blackboard specification file to predict the performance of the concurrent blackboard design. The only part of the source code for the concurrent blackboard system that the user must supply is the code implementing the functionality of the knowledge sources.
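As a concrete illustration of the architecture such a generator targets, the following minimal Python sketch implements the blackboard pattern itself: knowledge sources inspect a shared blackboard and a simple controller fires whichever source can contribute until quiescence. The class names and toy knowledge sources are hypothetical; the actual generated system is C++ over PVM with user-supplied knowledge-source logic.

```python
# Minimal, illustrative blackboard pattern (not the McManus C++/PVM generator).

class Blackboard:
    def __init__(self):
        self.data = {}

class KnowledgeSource:
    """Base class; concrete sources implement can_contribute/contribute."""
    def can_contribute(self, bb): raise NotImplementedError
    def contribute(self, bb): raise NotImplementedError

class Doubler(KnowledgeSource):
    def can_contribute(self, bb): return "x" in bb.data and "x2" not in bb.data
    def contribute(self, bb): bb.data["x2"] = 2 * bb.data["x"]

class Summarizer(KnowledgeSource):
    def can_contribute(self, bb): return "x2" in bb.data and "report" not in bb.data
    def contribute(self, bb): bb.data["report"] = f"x2 = {bb.data['x2']}"

def controller(bb, sources):
    # Fire eligible knowledge sources until none can act (quiescence).
    fired = True
    while fired:
        fired = False
        for ks in sources:
            if ks.can_contribute(bb):
                ks.contribute(bb)
                fired = True

bb = Blackboard()
bb.data["x"] = 21
controller(bb, [Doubler(), Summarizer()])
print(bb.data["report"])  # -> "x2 = 42"
```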
DomSign: a top-down annotation pipeline to enlarge enzyme space in the protein universe.
Wang, Tianmin; Mori, Hiroshi; Zhang, Chong; Kurokawa, Ken; Xing, Xin-Hui; Yamada, Takuji
2015-03-21
Computational predictions of catalytic function are vital for in-depth understanding of enzymes. Because several novel approaches performing better than the common BLAST tool are rarely applied in research, we hypothesized that there is a large gap between the number of known annotated enzymes and the actual number in the protein universe, which significantly limits our ability to extract additional biologically relevant functional information from the available sequencing data. To reliably expand the enzyme space, we developed DomSign, a highly accurate domain signature-based enzyme functional prediction tool to assign Enzyme Commission (EC) digits. DomSign is a top-down prediction engine that yields results comparable, or superior, to those from many benchmark EC number prediction tools, including BLASTP, when a homolog with an identity >30% is not available in the database. Performance tests showed that DomSign is a highly reliable enzyme EC number annotation tool. After multiple tests, the accuracy is thought to be greater than 90%. Thus, DomSign can be applied to large-scale datasets, with the goal of expanding the enzyme space with high fidelity. Using DomSign, we successfully increased the percentage of EC-tagged enzymes from 12% to 30% in UniProt-TrEMBL. In the Kyoto Encyclopedia of Genes and Genomes bacterial database, the percentage of EC-tagged enzymes for each bacterial genome could be increased from 26.0% to 33.2% on average. Metagenomic mining was also efficient, as exemplified by the application of DomSign to the Human Microbiome Project dataset, recovering nearly one million new EC-labeled enzymes. Our results offer preliminary confirmation of the existence of the hypothesized huge number of "hidden enzymes" in the protein universe, the identification of which could substantially further our understanding of the metabolisms of diverse organisms and also facilitate bioengineering by providing a richer enzyme resource. Furthermore, our results highlight the necessity of using more advanced computational tools than BLAST in protein database annotations to extract additional biologically relevant functional information from the available biological sequences.
The Launch Systems Operations Cost Model
NASA Technical Reports Server (NTRS)
Prince, Frank A.; Hamaker, Joseph W. (Technical Monitor)
2001-01-01
One of NASA's primary missions is to reduce the cost of access to space while simultaneously increasing safety. A key component, and one of the least understood, is the recurring operations and support cost for reusable launch systems. In order to predict these costs, NASA, under the leadership of the Independent Program Assessment Office (IPAO), has commissioned the development of a Launch Systems Operations Cost Model (LSOCM). LSOCM is a tool to predict the operations & support (O&S) cost of new and modified reusable (and partially reusable) launch systems. The requirements are to predict the non-recurring cost for the ground infrastructure and the recurring cost of maintaining that infrastructure, performing vehicle logistics, and performing the O&S actions to return the vehicle to flight. In addition, the model must estimate the time required to cycle the vehicle through all of the ground processing activities. The current version of LSOCM is an amalgamation of existing tools, leveraging our understanding of shuttle operations cost with a means of predicting how the maintenance burden will change as the vehicle becomes more aircraft-like. The use of the Conceptual Operations Manpower Estimating Tool/Operations Cost Model (COMET/OCM) provides a solid point of departure based on shuttle and expendable launch vehicle (ELV) experience. The incorporation of the Reliability and Maintainability Analysis Tool (RMAT) as expressed by a set of response surface model equations gives a method for estimating how changing launch system characteristics affects cost and cycle time as compared to today's shuttle system. Plans are being made to improve the model. The development team will be spending the next few months devising a structured methodology that will enable verified and validated algorithms to give accurate cost estimates. To assist in this endeavor the LSOCM team is part of an Agency-wide effort to combine resources with other cost and operations professionals to support models, databases, and operations assessments.
NASA Astrophysics Data System (ADS)
Holoien, Thomas W.-S.; Marshall, Philip J.; Wechsler, Risa H.
2017-06-01
We describe two new open-source tools written in Python for performing extreme deconvolution Gaussian mixture modeling (XDGMM) and using a conditioned model to re-sample observed supernova and host galaxy populations. XDGMM is a new program that uses Gaussian mixtures to perform density estimation of noisy data using extreme deconvolution (XD) algorithms. Additionally, it has functionality not available in other XD tools. It allows the user to select between the AstroML and Bovy et al. fitting methods and is compatible with scikit-learn machine learning algorithms. Most crucially, it allows the user to condition a model based on the known values of a subset of parameters. This gives the user the ability to produce a tool that can predict unknown parameters based on a model that is conditioned on known values of other parameters. EmpiriciSN is an exemplary application of this functionality, which can be used to fit an XDGMM model to observed supernova/host data sets and predict likely supernova parameters using a model conditioned on observed host properties. It is primarily intended to simulate realistic supernovae for LSST data simulations based on empirical galaxy properties.
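The operation that makes this prediction possible is standard conditioning of a Gaussian mixture on a subset of its dimensions. The following NumPy sketch shows that core computation under the assumption of an already-fitted mixture; it illustrates the mathematics only and does not reproduce the XDGMM package's actual API.

```python
import numpy as np

def condition_gmm(weights, means, covs, z_obs, y_idx, z_idx):
    """Return the conditional mixture p(y | z = z_obs)."""
    new_w, new_mu, new_cov = [], [], []
    for w, mu, S in zip(weights, means, covs):
        Syy = S[np.ix_(y_idx, y_idx)]
        Syz = S[np.ix_(y_idx, z_idx)]
        Szz = S[np.ix_(z_idx, z_idx)]
        Szz_inv = np.linalg.inv(Szz)
        dz = z_obs - mu[z_idx]
        mu_c = mu[y_idx] + Syz @ Szz_inv @ dz          # conditional mean
        S_c = Syy - Syz @ Szz_inv @ Syz.T              # conditional covariance
        # Reweight by how likely each component finds the observed z.
        norm = np.exp(-0.5 * dz @ Szz_inv @ dz) / np.sqrt(
            (2 * np.pi) ** len(z_idx) * np.linalg.det(Szz))
        new_w.append(w * norm)
        new_mu.append(mu_c)
        new_cov.append(S_c)
    new_w = np.array(new_w)
    return new_w / new_w.sum(), new_mu, new_cov

# Toy 2D mixture: condition the first dimension on an observed second dimension.
w, mu, cov = condition_gmm(
    weights=[0.6, 0.4],
    means=[np.array([0.0, 1.0]), np.array([2.0, -1.0])],
    covs=[np.eye(2), np.array([[1.0, 0.3], [0.3, 1.0]])],
    z_obs=np.array([0.5]), y_idx=[0], z_idx=[1])
print(w)
```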
Battery Lifetime Analysis and Simulation Tool (BLAST) Documentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neubauer, J.
2014-12-01
The deployment and use of lithium-ion (Li-ion) batteries in automotive and stationary energy storage applications must be optimized to justify their high up-front costs. Given that batteries degrade with use and storage, such optimizations must evaluate many years of operation. As the degradation mechanisms are sensitive to temperature, state-of-charge (SOC) histories, current levels, and cycle depth and frequency, it is important to model both the battery and the application to a high level of detail to ensure battery response is accurately predicted. To address these issues, the National Renewable Energy Laboratory (NREL) has developed the Battery Lifetime Analysis and Simulation Tool (BLAST) suite. This suite of tools pairs NREL’s high-fidelity battery degradation model with a battery electrical and thermal performance model, application-specific electrical and thermal performance models of the larger system (e.g., an electric vehicle), application-specific system use data (e.g., vehicle travel patterns and driving data), and historic climate data from cities across the United States. This provides highly realistic long-term predictions of battery response and thereby enables quantitative comparisons of varied battery use strategies.
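As a rough illustration of the kind of lifetime calculation such a suite performs, the sketch below combines a square-root-in-time calendar-fade term with a cycling-fade term. The functional forms and coefficients are invented for illustration and are not NREL's calibrated degradation model.

```python
import numpy as np

# Illustrative capacity-fade sketch; coefficients are assumptions, not NREL's.

def capacity_fraction(days, cycles_per_day, dod, temp_c,
                      a=0.0025, b=1.5e-5, ea=0.05):
    t = np.asarray(days, dtype=float)
    # Calendar fade: sqrt(time), accelerated by temperature (Arrhenius-like).
    arrhenius = np.exp(ea * (temp_c - 25.0) / 10.0)
    calendar = a * arrhenius * np.sqrt(t)
    # Cycling fade: proportional to cumulative cycles, worsened by deep DOD.
    cycling = b * (dod ** 1.5) * cycles_per_day * t
    return 1.0 - calendar - cycling

# Remaining capacity after 8 years of daily 80%-DOD cycling at 30 C.
print(capacity_fraction(365 * 8, cycles_per_day=1.0, dod=0.8, temp_c=30))
```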
Analysis of LH Launcher Arrays (Like the ITER One) Using the TOPLHA Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maggiora, R.; Milanesio, D.; Vecchi, G.
2009-11-26
TOPLHA (Torino Polytechnic Lower Hybrid Antenna) code is an innovative tool for the 3D/1D simulation of Lower Hybrid (LH) antennas, i.e. accounting for realistic 3D waveguides geometry and for accurate 1D plasma models, and without restrictions on waveguide shape, including curvature. This tool provides a detailed performance prediction of any LH launcher, by computing the antenna scattering parameters, the current distribution, electric field maps and power spectra for any user-specified waveguide excitation. In addition, a fully parallelized and multi-cavity version of TOPLHA permits the analysis of large and complex waveguide arrays in a reasonable simulation time. A detailed analysis of the performances of the proposed ITER LH antenna geometry has been carried out, underlining the strong dependence of the antenna input parameters on plasma conditions. A preliminary optimization of the antenna dimensions has also been accomplished. Electric current distribution on conductors, electric field distribution at the interface with plasma, and power spectra have been calculated as well. The analysis shows the strong capabilities of the TOPLHA code as a predictive tool and its usefulness to LH launcher arrays detailed design.
Development and application of incrementally complex tools for wind turbine aerodynamics
NASA Astrophysics Data System (ADS)
Gundling, Christopher H.
Advances and availability of computational resources have made wind farm design using simulation tools a reality. Wind farms are battling two issues, affecting the cost of energy, that will make or break many future investments in wind energy. The most significant issue is the power reduction of downstream turbines operating in the wake of upstream turbines. The loss of energy from wind turbine wakes is difficult to predict and the underestimation of energy losses due to wakes has been a common problem throughout the industry. The second issue is a shorter lifetime of blades and past failures of gearboxes due to increased fluctuations in the unsteady loading of waked turbines. The overall goal of this research is to address these problems by developing a platform for a multi-fidelity wind turbine aerodynamic performance and wake prediction tool. Full-scale experiments in the field have dramatically helped researchers understand the unique issues inside a large wind farm, but experimental methods can only be used to a limited extent due to the cost of such field studies and the size of wind farms. The uncertainty of the inflow is another inherent drawback of field experiments. Therefore, computational fluid dynamics (CFD) predictions, strategically validated using carefully performed wind farm field campaigns, are becoming a more standard design practice. The developed CFD models include a blade element model (BEM) code with a free-vortex wake, an actuator disk or line based method with large eddy simulations (LES) and a fully resolved rotor based method with detached eddy simulations (DES) and adaptive mesh refinement (AMR). To create more realistic simulations, performance of a one-way coupling between different mesoscale atmospheric boundary layer (ABL) models and the three microscale CFD solvers is tested. These methods are validated using data from incrementally complex test cases that include the NREL Phase VI wind tunnel test, the Sexbierum wind farm and the Lillgrund offshore wind farm. By cross-comparing the lowest complexity free-vortex method with the higher complexity methods, a fast and accurate simulation tool has been generated that can perform wind farm simulations in a few hours.
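At the low-fidelity end of such a hierarchy sits an engineering wake model. The sketch below implements the classic Jensen (Park) wake-deficit formula, a common baseline for the wake losses discussed above; the thrust coefficient, rotor radius, and wake-decay constant are illustrative values, not parameters from this study.

```python
import numpy as np

# Jensen (Park) wake model: a top-hat wake expanding linearly downstream.

def jensen_deficit(x, ct=0.8, r0=40.0, k=0.05):
    """Fractional velocity deficit a distance x downstream of the rotor."""
    a = 0.5 * (1.0 - np.sqrt(1.0 - ct))   # axial induction from thrust coeff.
    rw = r0 + k * x                       # linearly expanding wake radius
    return 2.0 * a * (r0 / rw) ** 2

u_inf = 8.0                                          # free-stream speed, m/s
u_wake = u_inf * (1.0 - jensen_deficit(x=560.0))     # 7 diameters downstream
print(f"waked wind speed: {u_wake:.2f} m/s")
# Power scales roughly with U^3, so even a ~10% deficit costs ~27% power.
```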
Singh, Urminder; Rajkumar, Mohan Singh; Garg, Rohini
2017-01-01
Long non-coding RNAs (lncRNAs) make up a significant portion of non-coding RNAs and are involved in a variety of biological processes. Accurate identification/annotation of lncRNAs is the primary step for gaining deeper insights into their functions. In this study, we report a novel tool, PLncPRO, for prediction of lncRNAs in plants using transcriptome data. PLncPRO is based on machine learning and uses the random forest algorithm to classify coding and long non-coding transcripts. PLncPRO has better prediction accuracy than other existing tools and is particularly well-suited for plants. We developed consensus models for dicots and monocots to facilitate prediction of lncRNAs in non-model/orphan plants. PLncPRO also performed well with vertebrate transcriptome data. Using PLncPRO, we discovered 3714 and 3457 high-confidence lncRNAs in rice and chickpea, respectively, under drought or salinity stress conditions. We investigated the characteristics and differential expression of these lncRNAs under drought/salinity stress conditions, and validated lncRNAs via RT-qPCR. Overall, we developed a new tool for the prediction of lncRNAs in plants and showed its utility via identification of lncRNAs in rice and chickpea. PMID:29036354
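The classification core of such a tool is compact. The sketch below trains a scikit-learn random forest on toy sequence features to separate two transcript classes; the feature set and the randomly generated sequences and labels are stand-ins, far simpler than PLncPRO's actual feature engineering.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def toy_features(seq):
    comp = [seq.count(b) / len(seq) for b in "ACGT"]   # base composition
    gc = comp[1] + comp[2]                             # GC fraction
    return comp + [gc, len(seq)]

rng = np.random.default_rng(0)
seqs = ["".join(rng.choice(list("ACGT"), size=300)) for _ in range(200)]
labels = rng.integers(0, 2, size=200)   # stand-in for coding/lncRNA labels

X = np.array([toy_features(s) for s in seqs])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
# Random labels give chance-level (~0.5) accuracy; real features and labels
# separate the classes far better, as reported for PLncPRO.
print(cross_val_score(clf, X, labels, cv=5).mean())
```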
The Use of a Block Diagram Simulation Language for Rapid Model Prototyping
NASA Technical Reports Server (NTRS)
Whitlow, Johnathan E.; Engrand, Peter
1996-01-01
The research performed this summer was a continuation of work performed during the 1995 NASA/ASEE Summer Fellowship. The focus of the work was to expand previously generated predictive models for liquid oxygen (LOX) loading into the external fuel tank of the shuttle. The models, which were developed using a block diagram simulation language known as VisSim, were evaluated on numerous shuttle flights and found to perform well in most cases. Once the models were refined and validated, the predictive methods were integrated into the existing Rockwell software propulsion advisory tool (PAT). Although time was not sufficient to completely integrate the models developed into PAT, the ability to predict flows and pressures in the orbiter section and graphically display the results was accomplished.
NASA Astrophysics Data System (ADS)
Jebur, M. N.; Pradhan, B.; Shafri, H. Z. M.; Yusof, Z.; Tehrany, M. S.
2014-10-01
Modeling and classification difficulties are fundamental issues in natural hazard assessment. A geographic information system (GIS) is a domain that requires users to use various tools to perform different types of spatial modeling. Bivariate statistical analysis (BSA) assists in hazard modeling. To perform this analysis, several calculations are required and the user has to transfer data from one format to another. Most researchers perform these calculations manually by using Microsoft Excel or other programs. This process is time consuming and carries a degree of uncertainty. The lack of proper tools to implement BSA in a GIS environment prompted this study. In this paper, a user-friendly tool, BSM (bivariate statistical modeler), for the BSA technique is proposed. Three popular BSA techniques, namely frequency ratio, weights-of-evidence, and evidential belief function models, are applied in the newly proposed ArcMAP tool. This tool is programmed in Python and provides a simple graphical user interface, which facilitates the improvement of model performance. The proposed tool implements BSA automatically, thus allowing numerous variables to be examined. To validate the capability and accuracy of this program, a pilot test area in Malaysia is selected and all three models are tested by using the proposed program. Area under curve is used to measure the success rate and prediction rate. Results demonstrate that the proposed program executes BSA with reasonable accuracy. The proposed BSA tool can be used in numerous applications, such as natural hazard, mineral potential, hydrological, and other engineering and environmental applications.
NASA Astrophysics Data System (ADS)
Jebur, M. N.; Pradhan, B.; Shafri, H. Z. M.; Yusoff, Z. M.; Tehrany, M. S.
2015-03-01
Modelling and classification difficulties are fundamental issues in natural hazard assessment. A geographic information system (GIS) is a domain that requires users to use various tools to perform different types of spatial modelling. Bivariate statistical analysis (BSA) assists in hazard modelling. To perform this analysis, several calculations are required and the user has to transfer data from one format to another. Most researchers perform these calculations manually by using Microsoft Excel or other programs. This process is time-consuming and carries a degree of uncertainty. The lack of proper tools to implement BSA in a GIS environment prompted this study. In this paper, a user-friendly tool, bivariate statistical modeler (BSM), for the BSA technique is proposed. Three popular BSA techniques, namely frequency ratio, weight-of-evidence (WoE), and evidential belief function (EBF) models, are applied in the newly proposed ArcMAP tool. This tool is programmed in Python and provides a simple graphical user interface (GUI), which facilitates the improvement of model performance. The proposed tool implements BSA automatically, thus allowing numerous variables to be examined. To validate the capability and accuracy of this program, a pilot test area in Malaysia is selected and all three models are tested by using the proposed program. Area under curve (AUC) is used to measure the success rate and prediction rate. Results demonstrate that the proposed program executes BSA with reasonable accuracy. The proposed BSA tool can be used in numerous applications, such as natural hazard, mineral potential, hydrological, and other engineering and environmental applications.
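Of the three techniques, the frequency ratio is the simplest to state: the share of hazard events falling in a factor class divided by the share of study area occupied by that class. A minimal NumPy version, with synthetic rasters standing in for real factor and inventory maps, might look like this:

```python
import numpy as np

def frequency_ratio(class_map, event_mask):
    """FR per factor class; FR > 1 marks a class positively associated
    with hazard events."""
    classes = np.unique(class_map)
    total_pix = class_map.size
    total_events = event_mask.sum()
    fr = {}
    for c in classes:
        in_class = class_map == c
        pct_events = event_mask[in_class].sum() / total_events
        pct_area = in_class.sum() / total_pix
        fr[c] = pct_events / pct_area
    return fr

# Synthetic stand-ins: a 3-class slope raster and a sparse event inventory.
slope_class = np.random.default_rng(1).integers(1, 4, size=(100, 100))
events = np.random.default_rng(2).random((100, 100)) < 0.02
print(frequency_ratio(slope_class, events))
```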
Application of linear regression analysis in accuracy assessment of rolling force calculations
NASA Astrophysics Data System (ADS)
Poliak, E. I.; Shim, M. K.; Kim, G. S.; Choo, W. Y.
1998-10-01
Efficient operation of the computational models employed in process control systems requires periodic assessment of the accuracy of their predictions. Linear regression is proposed as a tool which allows separating systematic and random prediction errors from those related to measurements. A quantitative characteristic of the model predictive ability is introduced in addition to standard statistical tests for model adequacy. Rolling force calculations are considered as an example application. However, the outlined approach can be used to assess the performance of any computational model.
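A minimal version of the proposed check can be written in a few lines: regress measured values on model predictions, and read systematic error from the intercept and slope (ideally 0 and 1) and random error from the residual scatter. The data below are synthetic stand-ins for mill measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
predicted = rng.uniform(800, 1600, size=50)               # model output, kN
measured = 0.97 * predicted + 25 + rng.normal(0, 15, 50)  # synthetic mill data

# Fit measured = intercept + slope * predicted by least squares.
A = np.column_stack([np.ones_like(predicted), predicted])
(intercept, slope), *_ = np.linalg.lstsq(A, measured, rcond=None)
residual_sd = np.std(measured - A @ [intercept, slope], ddof=2)

# An adequate model has intercept ~ 0 and slope ~ 1; departures indicate
# systematic bias that can be corrected before retuning the model itself.
print(f"intercept={intercept:.1f} kN, slope={slope:.3f}, sd={residual_sd:.1f}")
```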
Control and prediction of the course of brewery fermentations by gravimetric analysis.
Kosín, P; Savel, J; Broz, A; Sigler, K
2008-01-01
A simple, fast and cheap test suitable for predicting the course of brewery fermentations based on mass analysis is described and its efficiency is evaluated. Compared to commonly used yeast vitality tests, this analysis takes into account wort composition and other factors that influence fermentation performance. It can be used to predict the shape of the fermentation curve in brewery fermentations and in research and development projects concerning yeast vitality, fermentation conditions and wort composition. It can also be a useful tool for homebrewers to control their fermentations.
Investigation of computational aeroacoustic tools for noise predictions of wind turbine aerofoils
NASA Astrophysics Data System (ADS)
Humpf, A.; Ferrer, E.; Munduate, X.
2007-07-01
In this work trailing edge noise levels of a research aerofoil have been computed and compared to aeroacoustic measurements using two different approaches. Two methods of different computational cost have been tested, and quantitative and qualitative results obtained. On the one hand, the semi-empirical noise prediction tool NAFNoise [Moriarty P 2005 NAFNoise User's Guide. Golden, Colorado, July. http://wind.nrel.gov/designcodes/simulators/NAFNoise] was used to directly predict trailing edge noise by taking into consideration the nature of the experiments. On the other hand, aerodynamic and aeroacoustic calculations were performed with the full Navier-Stokes CFD code Fluent [Fluent Inc 2005 Fluent 6.2 Users Guide, Lebanon, NH, USA] on the basis of a steady RANS simulation. Aerodynamic characteristics were computed with the aid of various turbulence models, and the combined use of the implemented broadband noise source models was employed to isolate and determine the trailing edge noise level.
Oguz, Cihan; Sen, Shurjo K; Davis, Adam R; Fu, Yi-Ping; O'Donnell, Christopher J; Gibbons, Gary H
2017-10-26
One goal of personalized medicine is leveraging the emerging tools of data science to guide medical decision-making. Achieving this using disparate data sources is most daunting for polygenic traits. To this end, we employed random forests (RFs) and neural networks (NNs) for predictive modeling of coronary artery calcium (CAC), which is an intermediate endophenotype of coronary artery disease (CAD). Model inputs were derived from advanced cases in the ClinSeq® discovery cohort (n=16) and the FHS replication cohort (n=36) from the 89th-99th CAC score percentile range, and age-matched controls (ClinSeq® n=16, FHS n=36) with no detectable CAC (all subjects were Caucasian males). These inputs included clinical variables and genotypes of 56 single nucleotide polymorphisms (SNPs) ranked highest in terms of their nominal correlation with the advanced CAC state in the discovery cohort. Predictive performance was assessed by computing the areas under receiver operating characteristic curves (ROC-AUC). RF models trained and tested with clinical variables generated ROC-AUC values of 0.69 and 0.61 in the discovery and replication cohorts, respectively. In contrast, in both cohorts, the set of SNPs derived from the discovery cohort was highly predictive (ROC-AUC ≥0.85), with no significant change in predictive performance upon integration of clinical and genotype variables. Using the 21 SNPs that produced optimal predictive performance in both cohorts, we developed NN models trained with ClinSeq® data and tested with FHS data and obtained high predictive accuracy (ROC-AUC=0.80-0.85) with several topologies. Several CAD and "vascular aging" related biological processes were enriched in the network of genes constructed from the predictive SNPs. We identified a molecular network predictive of advanced coronary calcium using genotype data from the ClinSeq® and FHS cohorts. Our results illustrate that machine learning tools, which utilize complex interactions between disease predictors intrinsic to the pathogenesis of polygenic disorders, hold promise for deriving predictive disease models and networks.
The Durham Adaptive Optics Simulation Platform (DASP): Current status
NASA Astrophysics Data System (ADS)
Basden, A. G.; Bharmal, N. A.; Jenkins, D.; Morris, T. J.; Osborn, J.; Peng, J.; Staykov, L.
2018-01-01
The Durham Adaptive Optics Simulation Platform (DASP) is a Monte-Carlo modelling tool used for the simulation of astronomical and solar adaptive optics systems. In recent years, this tool has been used to predict the expected performance of the forthcoming extremely large telescope adaptive optics systems, and has seen the addition of several modules with new features, including Fresnel optics propagation and extended object wavefront sensing. Here, we provide an overview of the features of DASP and the situations in which it can be used. Additionally, the user tools for configuration and control are described.
Prediction of muscle performance during dynamic repetitive movement
NASA Technical Reports Server (NTRS)
Byerly, D. L.; Byerly, K. A.; Sognier, M. A.; Squires, W. G.
2003-01-01
BACKGROUND: During long-duration spaceflight, astronauts experience progressive muscle atrophy and often perform strenuous extravehicular activities. Post-flight, there is a lengthy recovery period with an increased risk for injury. Currently, there is a critical need for an enabling tool to optimize muscle performance and to minimize the risk of injury to astronauts while on-orbit and during post-flight recovery. Consequently, these studies were performed to develop a method to address this need. METHODS: Eight test subjects performed a repetitive dynamic exercise to failure at 65% of their upper torso weight using a Lordex spinal machine. Surface electromyography (SEMG) data was collected from the erector spinae back muscle. The SEMG data was evaluated using a 5th order autoregressive (AR) model and linear regression analysis. RESULTS: The best predictor found was an AR parameter, the mean average magnitude of AR poles, with r = 0.75 and p = 0.03. This parameter can predict performance to failure as early as the second repetition of the exercise. CONCLUSION: A method for predicting human muscle performance early during dynamic repetitive exercise was developed. The capability to predict performance to failure has many potential applications to the space program including evaluating countermeasure effectiveness on-orbit, optimizing post-flight recovery, and potential future real-time monitoring capability during extravehicular activity.
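For illustration, the predictor described above can be computed as follows: fit an autoregressive model of order 5 to an SEMG epoch and average the magnitudes of the AR poles. The least-squares fit and the white-noise test signal below are simplifications; the study's exact estimator and data are not reproduced.

```python
import numpy as np

def mean_ar_pole_magnitude(x, order=5):
    x = np.asarray(x, dtype=float)
    # Lagged design matrix: x[n] ~ a1*x[n-1] + ... + a_p*x[n-p]
    X = np.column_stack(
        [x[order - k - 1 : len(x) - k - 1] for k in range(order)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    # Poles are the roots of z^p - a1*z^(p-1) - ... - a_p.
    poles = np.roots(np.concatenate(([1.0], -a)))
    return np.abs(poles).mean()

rng = np.random.default_rng(0)
semg_epoch = rng.normal(size=1000)   # stand-in for one SEMG window
print(mean_ar_pole_magnitude(semg_epoch))
```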
A thermal sensation prediction tool for use by the profession
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fountain, M.E.; Huizenga, C.
1997-12-31
As part of a recent ASHRAE research project (781-RP), a thermal sensation prediction tool has been developed. This paper introduces the tool, describes the component thermal sensation models, and presents examples of how the tool can be used in practice. Since the main end product of the HVAC industry is the comfort of occupants indoors, tools for predicting occupant thermal response can be an important asset to designers of indoor climate control systems. The software tool presented in this paper incorporates several existing models for predicting occupant comfort.
A computer simulation of an adaptive noise canceler with a single input
NASA Astrophysics Data System (ADS)
Albert, Stuart D.
1991-06-01
A description of an adaptive noise canceler using Widrow's LMS algorithm is presented. A computer simulation of canceler performance (adaptive convergence time and frequency transfer function) was written for use as a design tool. The simulations, assumptions, and input parameters are described in detail. The simulation is used in a design example to predict the performance of an adaptive noise canceler in the simultaneous presence of both strong and weak narrow-band signals (a cosited frequency hopping radio scenario). On the basis of the simulation results, it is concluded that the simulation is suitable for use as an adaptive noise canceler design tool; i.e., it can be used to evaluate the effect of design parameter changes on canceler performance.
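A single-input canceler of this kind is usually realized as an adaptive line enhancer: the reference input is a delayed copy of the primary input, so the LMS filter learns the predictable narrow-band component and the error output retains the wide-band signal. The sketch below shows that structure; the filter length, delay, and step size are illustrative choices rather than the report's design parameters.

```python
import numpy as np

def lms_line_enhancer(x, taps=32, delay=16, mu=0.005):
    w = np.zeros(taps)
    y = np.zeros_like(x)
    e = np.zeros_like(x)
    for n in range(taps + delay, len(x)):
        u = x[n - delay - taps : n - delay][::-1]   # delayed tap vector
        y[n] = w @ u                                # narrow-band estimate
        e[n] = x[n] - y[n]                          # enhanced wide-band output
        w += 2 * mu * e[n] * u                      # Widrow-Hoff update
    return y, e

# A sinusoid (narrow-band interferer) buried in white noise.
t = np.arange(4000)
x = np.sin(2 * np.pi * 0.05 * t) \
    + 0.5 * np.random.default_rng(0).normal(size=4000)
narrowband, residual = lms_line_enhancer(x)
```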
Objective Analysis and Prediction Techniques.
1986-11-30
contract work performance period extended from November 25, 1981 to November 24, 1986. This report consists of two parts: Part One details the results and...be added to the ELAN to make it a truly effective research tool. Also, much more testing and streamlining should be performed to insure that its...before performing some kind of matching. Classification of the data in this manner reduces the number of data points with which we need to work from
2010-11-01
such as pay increases, promotions, increases in leadership responsibility, leadership performance/behavior ratings, and satisfaction at work. The... Vroom, 1964), self-efficacy (or expectation that one will succeed at a task) is a sub-component or direct predictor of overall motivation to perform a... satisfaction. Comparing Motivation to Lead and Motivation to Develop Leadership in Predicting Leadership Performance and Career Success In this
Muratov, Eugene; Lewis, Margaret; Fourches, Denis; Tropsha, Alexander; Cox, Wendy C
2017-04-01
Objective. To develop predictive computational models forecasting the academic performance of students in the didactic-rich portion of a doctor of pharmacy (PharmD) curriculum as admission-assisting tools. Methods. All PharmD candidates over three admission cycles were divided into two groups: those who completed the PharmD program with a GPA ≥ 3, and the remaining candidates. The Random Forest machine learning technique was used to develop a binary classification model based on 11 pre-admission parameters. Results. Robust and externally predictive models were developed, with a particularly high overall accuracy of 77% for candidates with high or low academic performance. These multivariate models were highly accurate in predicting these groups compared to models obtained using undergraduate GPA and composite PCAT scores only. Conclusion. The models developed in this study can be used to improve the admission process as preliminary filters and thus quickly identify candidates who are likely to be successful in the PharmD curriculum.
Kell, Alexander J E; Yamins, Daniel L K; Shook, Erica N; Norman-Haignere, Sam V; McDermott, Josh H
2018-05-02
A core goal of auditory neuroscience is to build quantitative models that predict cortical responses to natural sounds. Reasoning that a complete model of auditory cortex must solve ecologically relevant tasks, we optimized hierarchical neural networks for speech and music recognition. The best-performing network contained separate music and speech pathways following early shared processing, potentially replicating human cortical organization. The network performed both tasks as well as humans and exhibited human-like errors despite not being optimized to do so, suggesting common constraints on network and human performance. The network predicted fMRI voxel responses substantially better than traditional spectrotemporal filter models throughout auditory cortex. It also provided a quantitative signature of cortical representational hierarchy-primary and non-primary responses were best predicted by intermediate and late network layers, respectively. The results suggest that task optimization provides a powerful set of tools for modeling sensory systems. Copyright © 2018 Elsevier Inc. All rights reserved.
Thakur, Shalabh; Guttman, David S
2016-06-30
Comparative analysis of whole genome sequence data from closely related prokaryotic species or strains is becoming an increasingly important and accessible approach for addressing both fundamental and applied biological questions. While there are a number of excellent tools developed for performing this task, most scale poorly when faced with hundreds of genome sequences, and many require extensive manual curation. We have developed a de-novo genome analysis pipeline (DeNoGAP) for the automated, iterative and high-throughput analysis of data from comparative genomics projects involving hundreds of whole genome sequences. The pipeline is designed to perform reference-assisted and de novo gene prediction, homolog protein family assignment, ortholog prediction, functional annotation, and pan-genome analysis using a range of proven tools and databases. While most existing methods scale quadratically with the number of genomes since they rely on pairwise comparisons among predicted protein sequences, DeNoGAP scales linearly since the homology assignment is based on iteratively refined hidden Markov models. This iterative clustering strategy enables DeNoGAP to handle a very large number of genomes using minimal computational resources. Moreover, the modular structure of the pipeline permits easy updates as new analysis programs become available. DeNoGAP integrates bioinformatics tools and databases for comparative analysis of a large number of genomes. The pipeline offers tools and algorithms for annotation and analysis of completed and draft genome sequences. The pipeline is developed using Perl, BioPerl and SQLite on Ubuntu Linux version 12.04 LTS. Currently, the software package includes a script for automated installation of the necessary external programs on Ubuntu Linux; however, the pipeline should also be compatible with other Linux and Unix systems once the necessary external programs are installed. DeNoGAP is freely available at https://sourceforge.net/projects/denogap/ .
Does a selection interview predict year 1 performance in dental school?
McAndrew, R; Ellis, J; Valentine, R A
2017-05-01
It is important for dental schools to select students who will complete their degree and progress on to become the dentists of the future. The process should be transparent, fair and ethical and utilise selection tools that select appropriate students. The interview is an integral part of UK dental schools' student selection procedures. This study was undertaken in order to determine whether different interview methods (Cardiff with a multiple mini interview and Newcastle with a more traditional interview process) along with other components used in selection predicted academic performance in students. The admissions selection data for two dental schools (Cardiff and Newcastle) were collected and analysed alongside student performance in academic examinations in Year 1 of the respective schools. Correlation statistics were used to determine whether selection tools had any relevance to academic performance once students were admitted to their respective universities. Data were available for a total of 177 students (77 Cardiff and 100 Newcastle). Examination performance did not correlate with admission interview scores at either school; however, UKCAT score was linked to poor academic performance. Although interview methodology does not appear to correlate with academic performance, it remains an integral and very necessary part of the admissions process. Ultimately schools need to be comfortable with their admissions procedures in attracting and selecting the calibre of students they desire. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Li, Ginny X H; Vogel, Christine; Choi, Hyungwon
2018-06-07
While tandem mass spectrometry can detect post-translational modifications (PTM) at the proteome scale, reported PTM sites are often incomplete and include false positives. Computational approaches can complement these datasets by additional predictions, but most available tools use prediction models pre-trained by the developers for a single PTM type, and it remains a difficult task to perform large-scale batch prediction for multiple PTMs with flexible user control, including the choice of training data. We developed an R package called PTMscape which predicts PTM sites across the proteome based on a unified and comprehensive set of descriptors of the physico-chemical microenvironment of modified sites, with additional downstream analysis modules to test enrichment of individual or pairs of PTMs in protein domains. PTMscape is flexible in the ability to process any major modifications, such as phosphorylation and ubiquitination, while achieving sensitivity and specificity comparable to single-PTM methods and outperforming other multi-PTM tools. Applying this framework, we expanded the proteome-wide coverage of five major PTMs affecting different residues by prediction, especially for lysine and arginine modifications. Using a combination of experimentally acquired sites (PSP) and newly predicted sites, we discovered that crosstalk among multiple PTMs occurs more frequently than by random chance in key protein domains such as histone, protein kinase, and RNA recognition motifs, spanning various biological processes such as RNA processing, DNA damage response, signal transduction, and regulation of cell cycle. These results provide a proteome-scale analysis of crosstalk among major PTMs and can be easily extended to other types of PTM.
Aziz, Michael
2015-01-01
Recent technological advances have made airway management safer. Because difficult intubation remains challenging to predict, having tools readily available that can be used to manage a difficult airway in any setting is critical. Fortunately, video technology has resulted in improvements for intubation performance while using laryngoscopy by various means. These technologies have been applied to rigid optical stylets, flexible intubation scopes, and, most notably, rigid laryngoscopes. These tools have proven effective for the anticipated difficult airway as well as the unanticipated difficult airway.
2017-04-01
A Comparison of Predictive Thermo and Water Solvation Property Prediction Tools and Experimental Data for Selected...
A Unified Model of Performance for Predicting the Effects of Sleep and Caffeine
Ramakrishnan, Sridhar; Wesensten, Nancy J.; Kamimori, Gary H.; Moon, James E.; Balkin, Thomas J.; Reifman, Jaques
2016-01-01
Study Objectives: Existing mathematical models of neurobehavioral performance cannot predict the beneficial effects of caffeine across the spectrum of sleep loss conditions, limiting their practical utility. Here, we closed this research gap by integrating a model of caffeine effects with the recently validated unified model of performance (UMP) into a single, unified modeling framework. We then assessed the accuracy of this new UMP in predicting performance across multiple studies. Methods: We hypothesized that the pharmacodynamics of caffeine vary similarly during both wakefulness and sleep, and that caffeine has a multiplicative effect on performance. Accordingly, to represent the effects of caffeine in the UMP, we multiplied a dose-dependent caffeine factor (which accounts for the pharmacokinetics and pharmacodynamics of caffeine) to the performance estimated in the absence of caffeine. We assessed the UMP predictions in 14 distinct laboratory- and field-study conditions, including 7 different sleep-loss schedules (from 5 h of sleep per night to continuous sleep loss for 85 h) and 6 different caffeine doses (from placebo to repeated 200 mg doses to a single dose of 600 mg). Results: The UMP accurately predicted group-average psychomotor vigilance task performance data across the different sleep loss and caffeine conditions (6% < error < 27%), yielding greater accuracy for mild and moderate sleep loss conditions than for more severe cases. Overall, accounting for the effects of caffeine resulted in improved predictions (after caffeine consumption) by up to 70%. Conclusions: The UMP provides the first comprehensive tool for accurate selection of combinations of sleep schedules and caffeine countermeasure strategies to optimize neurobehavioral performance. Citation: Ramakrishnan S, Wesensten NJ, Kamimori GH, Moon JE, Balkin TJ, Reifman J. A unified model of performance for predicting the effects of sleep and caffeine. SLEEP 2016;39(10):1827–1841. PMID:27397562
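The central modeling assumption, that caffeine scales predicted performance multiplicatively through a dose-dependent factor, can be sketched as follows. The one-compartment pharmacokinetics, the linear baseline decline, and all constants below are illustrative stand-ins, not the published UMP's calibrated forms.

```python
import numpy as np

def caffeine_conc(t_h, dose_mg, ka=2.0, ke=0.2):
    """One-compartment oral PK profile (arbitrary concentration units)."""
    return dose_mg * (np.exp(-ke * t_h) - np.exp(-ka * t_h)) * ka / (ka - ke)

def performance(t_h, dose_mg=200.0, m=0.002):
    baseline = 100.0 - 1.5 * t_h                 # toy decline under sleep loss
    g = 1.0 + m * caffeine_conc(t_h, dose_mg)    # multiplicative caffeine factor
    return baseline * g

# Predicted performance over 24 h following a single 200 mg dose at t = 0.
t = np.linspace(0, 24, 97)
print(performance(t)[:4])
```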
Peng, Hui; Zheng, Yi; Blumenstein, Michael; Tao, Dacheng; Li, Jinyan
2018-04-16
The CRISPR/Cas9 system is a widely used genome editing tool. A prediction problem of great interest for this system is how to select optimal single guide RNAs (sgRNAs) such that the cleavage efficiency is high while the off-target effect is low. This work proposed a two-step averaging method (TSAM) for the regression of cleavage efficiencies of a set of sgRNAs by averaging the predicted efficiency scores of a boosting algorithm and those of a support vector machine (SVM). We also proposed to use profiled Markov properties as novel features to capture the global characteristics of sgRNAs. These new features are combined with the outstanding features ranked by the boosting algorithm for the training of the SVM regressor. TSAM improved the mean Spearman correlation coefficients compared with the state-of-the-art performance on benchmark datasets containing thousands of human, mouse and zebrafish sgRNAs. Our method can also be converted to make binary distinctions between efficient and inefficient sgRNAs with superior performance to the existing methods. The analysis reveals that highly efficient sgRNAs have lower melting temperature at the middle of the spacer, cut at parts of the genome closer to the 5' end, and contain more 'A' but fewer 'G' compared with inefficient ones. Comprehensive further analysis also demonstrates that our tool can predict an sgRNA's cutting efficiency with consistently good performance whether it is expressed from an U6 promoter in cells or from a T7 promoter in vitro. An online tool is available at http://www.aai-bioinfo.com/CRISPR/. Python and Matlab source codes are freely available at https://github.com/penn-hui/TSAM. Jinyan.Li@uts.edu.au. Supplementary data are available at Bioinformatics online.
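The two-step averaging itself is straightforward to sketch with scikit-learn: train a boosting regressor and an SVM regressor on the same features and average their predictions. The random features and synthetic efficiencies below stand in for the paper's sequence and Markov-property descriptors.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.svm import SVR
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                         # stand-in sgRNA features
y = X[:, 0] * 0.6 + rng.normal(scale=0.5, size=500)    # synthetic efficiencies

n_train = 400
gbr = GradientBoostingRegressor(random_state=0).fit(X[:n_train], y[:n_train])
svr = SVR(C=1.0, gamma="scale").fit(X[:n_train], y[:n_train])

# Two-step averaging: mean of the two regressors' predicted scores.
pred = 0.5 * (gbr.predict(X[n_train:]) + svr.predict(X[n_train:]))
print("Spearman r:", spearmanr(pred, y[n_train:])[0])
```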
Modeling of the UAE Wind Turbine for Refinement of FAST{_}AD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jonkman, J. M.
The Unsteady Aerodynamics Experiment (UAE) research wind turbine was modeled both aerodynamically and structurally in the FAST{_}AD wind turbine design code, and its response to wind inflows was simulated for a sample of test cases. A study was conducted to determine why wind turbine load magnitude discrepancies (inconsistencies in aerodynamic force coefficients, rotor shaft torque, and out-of-plane bending moments at the blade root across a range of operating conditions) exist between load predictions made by FAST{_}AD and other modeling tools and measured loads taken from the actual UAE wind turbine during the NASA-Ames wind tunnel tests. The acquired experimental test data represent the finest, most accurate set of wind turbine aerodynamic and induced flow field data available today. A sample of the FAST{_}AD model input parameters most critical to the aerodynamics computations was also systematically perturbed to determine their effect on load and performance predictions. Attention was focused on the simpler upwind rotor configuration, zero yaw error test cases. Inconsistencies in input file parameters, such as aerodynamic performance characteristics, explain a noteworthy fraction of the load prediction discrepancies of the various modeling tools.
Microstructure Modeling of 3rd Generation Disk Alloy
NASA Technical Reports Server (NTRS)
Jou, Herng-Jeng
2008-01-01
The objective of this initiative, funded by NASA's Aviation Safety Program, is to model, validate, and predict, with high fidelity, the microstructural evolution of third-generation high-refractory Ni-based disc superalloys during heat treating and service conditions. This initiative is a natural extension of the DARPA-AIM (Accelerated Insertion of Materials) initiative with GE/Pratt-Whitney and with other process simulation tools. Strong collaboration with the NASA Glenn Research Center (GRC) is a key component of this initiative, and the focus of this program is on industrially relevant disk alloys and heat treatment processes identified by GRC. Employing QuesTek's Computational Materials Dynamics technology and PrecipiCalc precipitation simulator, physics-based models are being used to achieve high predictive accuracy and precision. Combining these models with experimental data and probabilistic analysis, "virtual alloy design" can be performed. The predicted microstructures can be optimized to promote desirable features and concurrently eliminate undesirable phases that can limit the reliability and durability of the alloys. The well-calibrated and well-integrated software tools that are being applied under the proposed program will help gas turbine disk alloy manufacturers, processing facilities, and NASA to efficiently and effectively improve the performance of current and future disk materials.
On Bi-Grid Local Mode Analysis of Solution Techniques for 3-D Euler and Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Ibraheem, S. O.; Demuren, A. O.
1994-01-01
A procedure is presented for utilizing a bi-grid stability analysis as a practical tool for predicting multigrid performance in a range of numerical methods for solving Euler and Navier-Stokes equations. Model problems based on the convection, diffusion and Burgers' equations are used to illustrate the superiority of the bi-grid analysis as a predictive tool for multigrid performance in comparison to the smoothing factor derived from conventional von Neumann analysis. For the Euler equations, bi-grid analysis is presented for three upwind difference based factorizations, namely Spatial, Eigenvalue and Combination splits, and two central difference based factorizations, namely LU and ADI methods. In the former, both the Steger-Warming and van Leer flux-vector splitting methods are considered. For the Navier-Stokes equations, only the Beam-Warming (ADI) central difference scheme is considered. In each case, estimates of multigrid convergence rates from the bi-grid analysis are compared to smoothing factors obtained from single-grid stability analysis. Effects of grid aspect ratio and flow skewness are examined. Both predictions are compared with practical multigrid convergence rates for 2-D Euler and Navier-Stokes solutions based on the Beam-Warming central scheme.
Olives, Casey; Pagano, Marcello
2013-02-01
Lot Quality Assurance Sampling (LQAS) is a provably useful tool for monitoring health programmes. Although LQAS ensures acceptable Producer and Consumer risks, the literature alleges that the method suffers from poor specificity and positive predictive values (PPVs). We suggest that poor LQAS performance is due, in part, to variation in the true underlying distribution. However, until now the role of the underlying distribution in expected performance has not been adequately examined. We present Bayesian-LQAS (B-LQAS), an approach to incorporating prior information into the choice of the LQAS sample size and decision rule, and explore its properties through a numerical study. Additionally, we analyse vaccination coverage data from UNICEF's State of the World's Children in 1968-1989 and 2008 to exemplify the performance of LQAS and B-LQAS. Results of our numerical study show that the choice of LQAS sample size and decision rule is sensitive to the distribution of prior information, as well as to individual beliefs about the importance of correct classification. Application of the B-LQAS approach to the UNICEF data improves specificity and PPV in both time periods (1968-1989 and 2008) with minimal reductions in sensitivity and negative predictive value. LQAS is shown to be a robust tool that is not necessarily prone to poor specificity and PPV as previously alleged. In situations where prior or historical data are available, B-LQAS can lead to improvements in expected performance.
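For context, a classical LQAS design can be computed directly from binomial tail probabilities: find the smallest sample size and decision rule that keep both the Producer risk (rejecting a good lot) and the Consumer risk (accepting a bad lot) below target levels. The coverage thresholds and 10% risk levels below are conventional illustrative choices; B-LQAS would additionally weight these calculations by a prior.

```python
from scipy.stats import binom

def lqas_design(p_good=0.80, p_bad=0.50, alpha=0.10, beta=0.10, n_max=100):
    """Smallest (n, d) with accept rule 'successes >= d' meeting both risks."""
    for n in range(1, n_max + 1):
        for d in range(n + 1):
            producer_risk = binom.cdf(d - 1, n, p_good)      # reject good lot
            consumer_risk = 1 - binom.cdf(d - 1, n, p_bad)   # accept bad lot
            if producer_risk <= alpha and consumer_risk <= beta:
                return n, d
    return None

print(lqas_design())   # (sample size, decision rule) meeting both constraints
```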
Lindahl, Jonas; Danell, Rickard
The aim of this study was to provide a framework to evaluate bibliometric indicators as decision support tools from a decision making perspective and to examine the information value of early career publication rate as a predictor of future productivity. We used ROC analysis to evaluate a bibliometric indicator as a tool for binary decision making. The dataset consisted of 451 early career researchers in the mathematical sub-field of number theory. We investigated the effect of three different definitions of top performance groups (top 10%, top 25%, and top 50%); the consequences of using different thresholds in the prediction models; and the added prediction value of information on early career research collaboration and publications in prestige journals. We conclude that early career performance productivity has an information value in all tested decision scenarios, but future performance is more predictable if the definition of a high performance group is more exclusive. Estimated optimal decision thresholds using the Youden index indicated that the top 10% decision scenario should use 7 articles, the top 25% scenario should use 7 articles, and the top 50% scenario should use 5 articles to minimize prediction errors. A comparative analysis between the decision thresholds provided by the Youden index, which take consequences into consideration, and a method commonly used in evaluative bibliometrics, which does not take consequences into consideration when determining decision thresholds, indicated that differences are trivial for the top 25% and the top 50% groups. However, a statistically significant difference between the methods was found for the top 10% group. Information on early career collaboration and publication strategies did not add any prediction value to the bibliometric indicator publication rate in any of the models. The key contributions of this research are the focus on consequences in terms of prediction errors and the notion of transforming uncertainty into risk when choosing decision thresholds in bibliometrically informed decision making. The significance of our results is discussed from the point of view of science policy and management.
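The Youden-index threshold selection used above is easy to reproduce: compute the ROC curve and pick the cutoff maximizing J = sensitivity + specificity - 1. The synthetic publication counts below are stand-ins for the study's number-theory dataset.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
top_performer = rng.integers(0, 2, size=451)    # future top-group label
pub_rate = rng.poisson(4 + 4 * top_performer)   # early-career article counts

fpr, tpr, thresholds = roc_curve(top_performer, pub_rate)
j = tpr - fpr                                   # Youden index at each cutoff
best = np.argmax(j)
print(f"optimal cutoff: {thresholds[best]:.0f} articles, J = {j[best]:.2f}")
```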
Modeling of fiber orientation in viscous fluid flow with application to self-compacting concrete
NASA Astrophysics Data System (ADS)
Kolařík, Filip; Patzák, Bořek
2013-10-01
In recent years, unconventional concrete reinforcement has grown in popularity. Fiber reinforcement in particular is widely used in high-performance concretes like "Self Compacting Concrete" (SCC). The design of advanced tailor-made structures made of SCC can take advantage of anisotropic orientation of fibers. Tools for predicting fiber orientation can contribute to the design of tailor-made structures and allow the development of casting procedures that achieve the desired fiber distribution and orientation. This paper deals with the development and implementation of a suitable tool for the prediction of fiber orientation in a fluid based on the knowledge of the velocity field. A statistical approach to the topic is employed: fiber orientation is described by a probability distribution of the fiber angle.
Development and evaluation of the Screening Trajectory Ozone Prediction System (STOPS, version 1.0)
NASA Astrophysics Data System (ADS)
Czader, B. H.; Percell, P.; Byun, D.; Choi, Y.
2014-11-01
A hybrid Lagrangian-Eulerian modeling tool has been developed using the Eulerian framework of the Community Multiscale Air Quality (CMAQ) model. It is a moving nest that utilizes saved original CMAQ simulation results to provide boundary conditions, initial conditions, as well as emissions and meteorological parameters necessary for a simulation. Given that these files are available, this tool can run independently from the CMAQ whole domain simulation, and it is designed to simulate source-receptor relationships upon changes in emissions. In this tool, the original CMAQ horizontal domain is reduced to a small sub-domain that follows a trajectory defined by the mean mixed-layer wind. It has the same vertical structure and physical and chemical interactions as CMAQ except the advection calculation. The advantage of this tool compared to other Lagrangian models is its capability of utilizing realistic boundary conditions that change with space and time as well as detailed chemistry treatment. The correctness of the algorithms and the overall performance were evaluated against CMAQ simulation results. Its performance depends on the atmospheric conditions occurring during the simulation period, with the results being most similar to CMAQ under uniform wind conditions. The mean bias varies between -0.03 and -0.78 and the slope is between 0.99 and 1.01 for the different analyzed cases. For complicated meteorological conditions, such as wind circulation, the simulated mixing ratios deviate from CMAQ values as a result of the Lagrangian approach of using the mean wind for the nest's movement, but are still close, with the mean bias varying between 0.07 and -4.29 and the slope varying between 0.95 and 1.063 for the different analyzed cases. For historical reasons this hybrid Lagrangian-Eulerian tool is named the Screening Trajectory Ozone Prediction System (STOPS), but its use is not limited to ozone prediction: similarly to CMAQ, it can simulate concentrations of many species, including particulate matter and some toxic compounds, such as formaldehyde and 1,3-butadiene.
NASA Astrophysics Data System (ADS)
Nieto, Paulino José García; García-Gonzalo, Esperanza; Vilán, José Antonio Vilán; Robleda, Abraham Segade
2015-12-01
The main aim of this research work is to build a new practical hybrid regression model to predict milling tool wear in a regular cut as well as the entry and exit cuts of a milling tool. The model was based on Particle Swarm Optimization (PSO) in combination with support vector machines (SVMs). This optimization mechanism involved kernel parameter setting in the SVM training procedure, which significantly influences the regression accuracy. Bearing this in mind, a PSO-SVM-based model, which is based on statistical learning theory, was successfully used here to predict the milling tool flank wear (output variable) as a function of the following input variables: the time duration of the experiment, depth of cut, feed, type of material, etc. To accomplish the objective of this study, the experimental dataset represents experiments from runs on a milling machine under various operating conditions. In this way, data sampled by three different types of sensors (acoustic emission sensor, vibration sensor and current sensor) were acquired at several positions. A second aim is to determine the factors with the greatest bearing on the milling tool flank wear with a view to proposing milling machine improvements. Firstly, this hybrid PSO-SVM-based regression model captures the main perception of statistical learning theory in order to obtain a good prediction of the dependence among the flank wear (output variable) and input variables (time, depth of cut, feed, etc.). Indeed, regression with optimal hyperparameters was performed and a determination coefficient of 0.95 was obtained. The agreement of this model with experimental data confirmed its good performance. Secondly, the main advantages of this PSO-SVM-based model are its capacity to produce a simple, easy-to-interpret model, its ability to estimate the contributions of the input variables, and its computational efficiency. Finally, the main conclusions of this study are presented.
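The optimization loop pairing PSO with SVM regression can be sketched compactly: particles move through (log10 C, log10 gamma) space and their fitness is the cross-validated R^2 of an SVR. The synthetic data, swarm size, and PSO coefficients below are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))        # stand-ins: time, depth of cut, feed...
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=200)   # synthetic wear

def fitness(p):
    c, g = 10.0 ** p                 # particle position -> (C, gamma)
    return cross_val_score(SVR(C=c, gamma=g), X, y, cv=3, scoring="r2").mean()

n_particles, iters = 10, 15
pos = rng.uniform([-1, -3], [3, 1], size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, 1))
    # Inertia plus attraction toward personal and global bests.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, [-1, -3], [3, 1])
    f = np.array([fitness(p) for p in pos])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmax()].copy()

print("best (C, gamma):", 10.0 ** gbest, "R2:", pbest_f.max())
```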
NASA Astrophysics Data System (ADS)
Wang, S.; Ancell, B. C.; Huang, G. H.; Baetz, B. W.
2018-03-01
Data assimilation using the ensemble Kalman filter (EnKF) has been increasingly recognized as a promising tool for probabilistic hydrologic prediction. However, little effort has been made on the pre- and post-processing of assimilation experiments, posing a significant challenge to achieving the best predictive performance. This paper presents a unified data assimilation framework for improving the robustness of hydrologic ensemble predictions. Statistical pre-processing of assimilation experiments is conducted through factorial design and analysis to identify the best EnKF settings with maximized performance. After the data assimilation operation, statistical post-processing analysis is performed through factorial polynomial chaos expansion to efficiently address uncertainties in hydrologic predictions, as well as to explicitly reveal potential interactions among model parameters and their contributions to predictive accuracy. In addition, Gaussian anamorphosis is used to establish a seamless bridge between data assimilation and uncertainty quantification of hydrologic predictions. Both synthetic and real data assimilation experiments are carried out to demonstrate the feasibility and applicability of the proposed methodology in the Guadalupe River basin, Texas. Results suggest that statistical pre- and post-processing of data assimilation experiments provide meaningful insights into the dynamic behavior of hydrologic systems and enhance the robustness of hydrologic ensemble predictions.
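For reference, the analysis step of the stochastic EnKF around which such experiments revolve can be written compactly. The sketch below assumes a linear observation operator and a fixed observation error, which are simplifications:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err_std, H, seed=0):
    """One stochastic-EnKF analysis step.

    ensemble: (n_state, n_members) forecast ensemble
    obs: (n_obs,) observations; obs_err_std: their error std. dev.
    H: (n_obs, n_state) linear observation operator (a simplification)
    """
    n_obs, n_mem = len(obs), ensemble.shape[1]
    rng = np.random.default_rng(seed)
    # Perturb the observations once per member (the 'stochastic' part).
    obs_pert = obs[:, None] + obs_err_std * rng.standard_normal((n_obs, n_mem))
    Xf = ensemble - ensemble.mean(axis=1, keepdims=True)   # state anomalies
    Yf = H @ Xf                                            # observed anomalies
    Pyy = Yf @ Yf.T / (n_mem - 1) + obs_err_std**2 * np.eye(n_obs)
    Pxy = Xf @ Yf.T / (n_mem - 1)
    K = Pxy @ np.linalg.inv(Pyy)                           # Kalman gain
    return ensemble + K @ (obs_pert - H @ ensemble)

# Toy example: 2-variable state, 3 members, observing the first variable.
ens = np.array([[1.0, 1.2, 0.8],
                [0.5, 0.4, 0.6]])
analysis = enkf_update(ens, obs=np.array([1.1]), obs_err_std=0.1,
                       H=np.array([[1.0, 0.0]]))
```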
Translating New Science Into the Drug Review Process
Rouse, Rodney; Kruhlak, Naomi; Weaver, James; Burkhart, Keith; Patel, Vikram; Strauss, David G.
2017-01-01
In 2011, the US Food and Drug Administration (FDA) developed a strategic plan for regulatory science that focuses on developing new tools, standards, and approaches to assess the safety, efficacy, quality, and performance of FDA-regulated products. In line with this, the Division of Applied Regulatory Science was created to move new science into the Center for Drug Evaluation and Research (CDER) review process and close the gap between scientific innovation and drug review. The Division, located in the Office of Clinical Pharmacology, is unique in that it performs mission-critical applied research and review across the translational research spectrum, including in vitro and in vivo laboratory research, in silico computational modeling and informatics, and integrated clinical research covering clinical pharmacology, experimental medicine, and postmarket analyses. The Division collaborates with Offices throughout CDER, across the FDA, and with other government agencies, academia, and industry. The Division is able to rapidly form interdisciplinary teams of pharmacologists, biologists, chemists, computational scientists, and clinicians to respond to challenging regulatory questions for specific review issues and for longer-range projects requiring the development of predictive models, tools, and biomarkers to speed the development and regulatory evaluation of safe and effective drugs. This article reviews the Division’s recent work and future directions, highlighting development and validation of biomarkers; novel humanized animal models; translational predictive safety combining in vitro, in silico, and in vivo clinical biomarkers; chemical and biomedical informatics tools for safety predictions; novel approaches to speed the development of complex generic drugs, biosimilars, and antibiotics; and precision medicine. PMID:29568713
The Future of Air Traffic Management
NASA Technical Reports Server (NTRS)
Denery, Dallas G.; Erzberger, Heinz; Edwards, Thomas A. (Technical Monitor)
1998-01-01
A system for the control of terminal area traffic to improve productivity, referred to as the Center-TRACON Automation System (CTAS), is being developed at NASA's Ames Research Center under a joint program with the FAA. CTAS consists of a set of integrated tools that provide computer-generated advisories for en-route and terminal area controllers. The premise behind the design of CTAS is that successful planning of traffic requires accurate trajectory prediction. Databases of representative aircraft performance models, airline-preferred operational procedures, and a three-dimensional wind model support the trajectory prediction. The research effort has been the design of a set of automation tools that use this trajectory prediction capability to assist controllers in the overall management of traffic. The first tool, the Traffic Management Advisor (TMA), provides overall flow management between the en route and terminal areas. A second tool, the Final Approach Spacing Tool (FAST), provides terminal area controllers with sequence and runway advisories to allow optimal use of the runways. TMA and FAST are now being used in daily operations at Dallas/Ft. Worth airport. Additional activities include the development of several other tools: 1) the En Route Descent Advisor, which assists the en route controller in issuing conflict-free descents and ascents; 2) an extension of FAST to include speed and heading advisories, together with the Expedite Departure Path (EDP), which assists the terminal controller in the management of departures; and 3) the Collaborative Arrival Planner (CAP), which will assist the airlines in operational decision making. The purpose of this presentation is to review the CTAS concept and to present the results of recent field tests. The paper first discusses the overall concept and then the status of the individual tools.
NASA Astrophysics Data System (ADS)
Krumholz, Mark R.; Fumagalli, Michele; da Silva, Robert L.; Rendahl, Theodore; Parra, Jonathan
2015-09-01
Stellar population synthesis techniques for predicting the observable light emitted by a stellar population have extensive applications in numerous areas of astronomy. However, accurate predictions for small populations of young stars, such as those found in individual star clusters, star-forming dwarf galaxies, and small segments of spiral galaxies, require that the population be treated stochastically. Conversely, accurate deductions of the properties of such objects also require consideration of stochasticity. Here we describe a comprehensive suite of modular, open-source software tools for tackling these related problems. These include the following: a greatly-enhanced version of the SLUG code introduced by da Silva et al., which computes spectra and photometry for stochastically or deterministically sampled stellar populations with nearly arbitrary star formation histories, clustering properties, and initial mass functions; CLOUDY_SLUG, a tool that automatically couples SLUG-computed spectra with the CLOUDY radiative transfer code in order to predict stochastic nebular emission; BAYESPHOT, a general-purpose tool for performing Bayesian inference on the physical properties of stellar systems based on unresolved photometry; and CLUSTER_SLUG and SFR_SLUG, a pair of tools that use BAYESPHOT on a library of SLUG models to compute the mass, age, and extinction of mono-age star clusters, and the star formation rate of galaxies, respectively. The latter two tools make use of an extensive library of pre-computed stellar population models, which are included in the software. The complete package is available at http://www.slugsps.com.
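The stochasticity at issue can be illustrated with the simplest piece of the problem: drawing individual stellar masses from a power-law IMF until a target cluster mass is reached. This toy version (single-slope IMF, a 'stop-after' sampling policy) only sketches one of the many IMFs and sampling policies SLUG supports:

```python
import numpy as np

def sample_cluster(target_mass, alpha=2.35, m_min=0.08, m_max=120.0, seed=1):
    """Draw stellar masses from dN/dM ∝ M^-alpha until the cumulative mass
    reaches target_mass (a 'stop-after' sampling choice)."""
    rng = np.random.default_rng(seed)
    masses, total = [], 0.0
    a = 1.0 - alpha
    lo, hi = m_min**a, m_max**a   # inverse-transform sampling bounds
    while total < target_mass:
        m = (lo + rng.uniform() * (hi - lo)) ** (1.0 / a)
        masses.append(m)
        total += m
    return np.array(masses)

small = sample_cluster(500.0)   # a 500 Msun cluster: few draws, very stochastic
print(len(small), small.max())
```

For a 500 Msun cluster the draw-to-draw scatter in the resulting mass function is large, which is exactly the regime where deterministic population synthesis breaks down.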
Andrés, Mariano; Bernal, José Antonio; Sivera, Francisca; Quilis, Neus; Carmona, Loreto; Vela, Paloma; Pascual, Eliseo
2017-07-01
Gout-associated cardiovascular (CV) risk relates to comorbidities and crystal-led inflammation. The aim was to estimate CV risk with prediction tools in new patients with gout and to assess whether ultrasonographic carotid changes are present in patients without high CV risk. Cross-sectional study. Consecutive new patients with crystal-proven gout underwent a structured CV consultation covering CV events, risk factors, and two risk prediction tools: the Systematic COronary Risk Evaluation (SCORE) and the Framingham Heart Study (FHS) score. CV risk was stratified according to current European guidelines. Carotid ultrasound (cUS) was performed in patients with less than very high CV risk. The association of carotid plaques with SCORE and FHS was studied using the area under the curve (AUC) of receiver operating characteristic curves. 237 new patients with gout were recruited. CV stratification by the scores showed a predominance of very high (95 patients, 40.1%) and moderate (72 patients, 30.5%) risk levels. cUS was performed in 142 patients, finding atheroma plaques in 66 (46.5%, 95% CI 37.8 to 54.2). Following cUS findings, the proportion of patients classified as very high risk increased from 40.1% to 67.9% (161/237 patients). SCORE and FHS predicted the presence of atheroma plaques at cUS only moderately well (AUC 0.711 and 0.683, respectively). The majority of patients presenting with gout may be at very high CV risk, indicating the need to initiate optimal prevention strategies at this stage. Risk prediction tools appear to underestimate the presence of carotid plaque in patients with gout. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, Ba Nghiep; Fifield, Leonard S.; Gandhi, Umesh N.
This project proposed to integrate, optimize and validate the fiber orientation and length distribution models previously developed and implemented in the Autodesk Simulation Moldflow Insight (ASMI) package for injection-molded long-carbon-fiber (LCF) thermoplastic composites into a cohesive prediction capability. The current effort focused on rendering the developed models more robust and efficient for automotive part design to enable weight savings and cost reduction. The project goal was achieved by optimizing the developed models, improving and integrating their implementations in ASMI, and validating them for a complex 3D LCF thermoplastic automotive part (Figure 1). Both PP and PA66 were used as resin matrices. After validating ASMI predictions of fiber orientation and fiber length for this complex part against the corresponding measured data, PNNL, in collaboration with Toyota and Magna, developed a method using the predictive engineering tool to assess LCF/PA66 complex part design in terms of stiffness performance. Structural three-point bending analyses of the complex part and of similar parts in steel were performed for this purpose, and the team demonstrated the use of stiffness-based complex part design assessment to evaluate weight savings relative to the body-system target (≥35%) set in Table 2 of DE-FOA-0000648 (AOI #1). In addition, starting from the part-to-part analysis, the PE tools enabled an estimate of the weight reduction achievable for the vehicle body system using 50 wt% LCF/PA66 parts relative to the current steel system. From this analysis, the manufacturing cost (including material cost) of making the equivalent part in steel was also estimated and compared to the cost of making the LCF/PA66 part, to determine the cost per “saved” pound.
Charles, Patrick G P; Wolfe, Rory; Whitby, Michael; Fine, Michael J; Fuller, Andrew J; Stirling, Robert; Wright, Alistair A; Ramirez, Julio A; Christiansen, Keryn J; Waterer, Grant W; Pierce, Robert J; Armstrong, John G; Korman, Tony M; Holmes, Peter; Obrosky, D Scott; Peyrani, Paula; Johnson, Barbara; Hooy, Michelle; Grayson, M Lindsay
2008-08-01
Existing severity assessment tools, such as the pneumonia severity index (PSI) and CURB-65 (tool based on confusion, urea level, respiratory rate, blood pressure, and age ≥65 years), predict 30-day mortality in community-acquired pneumonia (CAP) and have limited ability to predict which patients will require intensive respiratory or vasopressor support (IRVS). The Australian CAP Study (ACAPS) was a prospective study of 882 episodes in which each patient had a detailed assessment of severity features, etiology, and treatment outcomes. Multivariate logistic regression was performed to identify features at initial assessment that were associated with receipt of IRVS. These results were converted into a simple points-based severity tool that was validated in 5 external databases, totaling 7464 patients. In ACAPS, 10.3% of patients received IRVS, and the 30-day mortality rate was 5.7%. The features statistically significantly associated with receipt of IRVS were low systolic blood pressure (2 points), multilobar chest radiography involvement (1 point), low albumin level (1 point), high respiratory rate (1 point), tachycardia (1 point), confusion (1 point), poor oxygenation (2 points), and low arterial pH (2 points): SMART-COP. A SMART-COP score of ≥3 points identified 92% of patients who received IRVS, including 84% of patients who did not need immediate admission to the intensive care unit. Accuracy was also high in the 5 validation databases. Sensitivities of PSI and CURB-65 for identifying the need for IRVS were 74% and 39%, respectively. SMART-COP is a simple, practical clinical tool for accurately predicting the need for IRVS that is likely to assist clinicians in determining CAP severity.
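Because the abstract lists the full point assignments, the SMART-COP calculation can be written down directly. The sketch below encodes exactly those weights; the clinical cut-offs defining each boolean criterion (e.g., what counts as low systolic blood pressure) are specified in the paper and omitted here:

```python
def smart_cop(sbp_low, multilobar, albumin_low, rr_high,
              tachycardia, confusion, oxygenation_poor, ph_low):
    """SMART-COP points as listed in the abstract; each argument is a boolean
    indicating whether the corresponding criterion is met."""
    score = (2 * sbp_low + multilobar + albumin_low + rr_high
             + tachycardia + confusion + 2 * oxygenation_poor + 2 * ph_low)
    return score, score >= 3   # >=3 points flags likely need for IRVS

score, needs_irvs = smart_cop(True, False, True, False,
                              False, False, True, False)
print(score, needs_irvs)  # 5 True
```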
USING DIRECT-PUSH TOOLS TO MAP HYDROSTRATIGRAPHY AND PREDICT MTBE PLUME DIVING
MTBE plumes have been documented to dive beneath screened intervals of conventional monitoring well networks at a number of LUST sites. This behavior makes these plumes difficult both to detect and remediate. Electrical conductivity logging and pneumatic slug testing performed in...
Water and wastewater infrastructure systems represent a major capital investment; utilities must ensure they are getting the highest yield possible on their investment, both in terms of dollars and water quality. Accurate information related to equipment, pipe characteristics, lo...
NASA Astrophysics Data System (ADS)
Rana, Narender; Zhang, Yunlin; Wall, Donald; Dirahoui, Bachir; Bailey, Todd C.
2015-03-01
Integrated circuit (IC) technology is going through multiple changes in patterning techniques (multiple patterning, EUV and DSA), device architectures (FinFET, nanowire, graphene) and patterning scale (a few nanometers). These changes require tight controls on processes and measurements to achieve the required device performance, and challenge metrology and process control in terms of capability and quality. Multivariate data with complex nonlinear trends and correlations generally cannot be described well by mathematical or parametric models, but can be learned relatively easily by computing machines and used to predict or extrapolate. This paper introduces a predictive metrology approach that has been applied to three different applications. Machine learning and predictive analytics have been leveraged to accurately predict the dimensions of EUV resist patterns down to 18 nm half pitch from resist shrinkage patterns; these patterns could not be directly and accurately measured due to metrology tool limitations. Machine learning has also been applied to predict electrical performance early in the process pipeline for deep-trench capacitance and metal line resistance. As a wafer goes through various processes, its associated cost multiplies, and it may take days to weeks to get the electrical performance readout; predicting electrical performance early on can therefore enable timely, actionable decisions such as rework, scrap, or feeding predicted information forward or backward to improve or monitor processes. This paper provides a general overview of machine learning and advanced analytics applications in advanced semiconductor development and manufacturing.
Performances of the PIPER scalable child human body model in accident reconstruction
Giordano, Chiara; Kleiven, Svein
2017-01-01
Human body models (HBMs) have the potential to provide significant insights into the pediatric response to impact. This study describes a scalable/posable approach to performing child accident reconstructions using the Position and Personalize Advanced Human Body Models for Injury Prediction (PIPER) scalable child HBM of different ages and in different positions obtained with the PIPER tool. Overall, the PIPER scalable child HBM predicted reasonably well the injury severity and locations for the children involved in real-life crash scenarios documented in the medical records. The developed methodology and workflow are essential for future work to determine child injury tolerances based on the full Child Advanced Safety Project for European Roads (CASPER) accident reconstruction database. With the workflow presented in this study, the open-source PIPER scalable HBM combined with the PIPER tool is also foreseen to inform improved safety designs for the better protection of children in traffic accidents. PMID:29135997
NASA Technical Reports Server (NTRS)
Lee, Alice T.; Gunn, Todd; Pham, Tuan; Ricaldi, Ron
1994-01-01
This handbook documents the three software analysis processes the Space Station Software Analysis team uses to assess space station software, including their backgrounds, theories, tools, and analysis procedures. Potential applications of these analysis results are also presented. The first section describes how software complexity analysis provides quantitative information on code, such as code structure and risk areas, throughout the software life cycle. Software complexity analysis allows an analyst to understand the software structure, identify critical software components, assess risk areas within a software system, identify testing deficiencies, and recommend program improvements. Performing this type of analysis during the early design phases of software development can positively affect the process, and may prevent later, much larger, difficulties. The second section describes how software reliability estimation and prediction analysis, or software reliability, provides a quantitative means to measure the probability of failure-free operation of a computer program, and describes the two tools used by JSC to determine failure rates and design tradeoffs between reliability, costs, performance, and schedule.
Navy Enhanced Sierra Mechanics (NESM): Toolbox for predicting Navy shock and damage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moyer, Thomas; Stergiou, Jonathan; Reese, Garth
2016-05-25
Here, the US Navy is developing a new suite of computational mechanics tools (Navy Enhanced Sierra Mechanics) for the prediction of ship response, damage, and shock environments transmitted to vital systems during threat weapon encounters. NESM includes fully coupled Euler-Lagrange solvers tailored to ship shock/damage predictions. NESM is optimized to support high-performance computing architectures, providing the physics-based ship response/threat weapon damage predictions needed to support the design and assessment of highly survivable ships. NESM is being employed to support current Navy ship design and acquisition programs while being further developed for future Navy fleet needs.
Implementing Machine Learning in the PCWG Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clifton, Andrew; Ding, Yu; Stuart, Peter
The Power Curve Working Group (www.pcwg.org) is an ad-hoc industry-led group to investigate the performance of wind turbines in real-world conditions. As part of ongoing experience-sharing exercises, machine learning has been proposed as a possible way to predict turbine performance. This presentation provides some background information about machine learning and how it might be implemented in the PCWG exercises.
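A minimal version of the proposal might look like the following: train a standard regressor to predict power from wind speed plus additional inputs such as turbulence intensity and shear. The synthetic data and the choice of a random forest are assumptions for illustration, not the PCWG's specification:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
# Hypothetical 10-minute SCADA records: wind speed, turbulence intensity, shear.
ws = rng.uniform(3, 20, 2000)
ti = rng.uniform(0.05, 0.25, 2000)
shear = rng.uniform(0.0, 0.4, 2000)
rated = 2000.0  # kW
# Invented power response: cubic ramp to rated, degraded by turbulence, plus noise.
power = (np.clip(rated * ((ws - 3) / 8) ** 3, 0, rated) * (1 - 0.5 * ti)
         + 20 * rng.normal(size=ws.size))

X = np.column_stack([ws, ti, shear])
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:1500], power[:1500])
print("held-out R^2:", model.score(X[1500:], power[1500:]))
```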
Addressing multi-label imbalance problem of surgical tool detection using CNN.
Sahu, Manish; Mukhopadhyay, Anirban; Szengel, Angelika; Zachow, Stefan
2017-06-01
A fully automated surgical tool detection framework is proposed for endoscopic video streams. State-of-the-art surgical tool detection methods rely on supervised one-vs-all or multi-class classification techniques, completely ignoring the co-occurrence relationship of the tools and the associated class imbalance. In this paper, we formulate tool detection as a multi-label classification task where tool co-occurrences are treated as separate classes. In addition, imbalance on tool co-occurrences is analyzed and stratification techniques are employed to address the imbalance during convolutional neural network (CNN) training. Moreover, temporal smoothing is introduced as an online post-processing step to enhance runtime prediction. Quantitative analysis is performed on the M2CAI16 tool detection dataset to highlight the importance of stratification, temporal smoothing and the overall framework for tool detection. The analysis on tool imbalance, backed by the empirical results, indicates the need and superiority of the proposed framework over state-of-the-art techniques.
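The temporal smoothing step can be as simple as a trailing-window average of the per-frame, per-tool probabilities before thresholding. The causal sketch below (window size and threshold are assumed values) keeps the operation usable online:

```python
import numpy as np

def smooth_causal(frame_probs, window=9):
    """Trailing-window mean of per-frame tool probabilities; causal, so it
    can run online as frames arrive."""
    frame_probs = np.asarray(frame_probs, dtype=float)
    out = np.empty_like(frame_probs)
    for i in range(len(frame_probs)):
        j = max(0, i - window + 1)
        out[i] = frame_probs[j:i + 1].mean(axis=0)
    return out

rng = np.random.default_rng(0)
probs = rng.uniform(size=(100, 7))       # 100 frames, 7 tool labels
present = smooth_causal(probs) > 0.5     # smoothed multi-label decisions
```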
The diagnostic value of troponin T testing in the community setting.
Planer, David; Leibowitz, David; Paltiel, Ora; Boukhobza, Rina; Lotan, Chaim; Weiss, Teddy A
2006-03-08
Many patients presenting with chest pain to their family physician are referred to the emergency room, in part due to a lack of accurate objective diagnostic tools. This study aimed to assess the diagnostic value of bedside troponin T kit testing in patients presenting with chest pain to their family physician. Prospective, multi-center study. Consecutive subjects with chest pain were recruited from 44 community clinics in Jerusalem. Following clinical assessment by the family physician, qualitative troponin kit testing was performed. Patients with a negative clinical assessment and a negative troponin kit were sent home; all others were referred to the emergency room. The final diagnosis at the time of hospital discharge was recorded, and telephone follow-up was performed after 60 days. The positive predictive value, negative predictive value, sensitivity, and specificity of the troponin kit for myocardial infarction diagnosis, and of the family physician's assessment for hospitalization, were assessed. Of 392 patients enrolled, 349 (89%) were included in the final analysis. The prevalence of myocardial infarction was 1.7%. The positive and negative predictive values of the troponin kit for myocardial infarction diagnosis were 100% and 99.7%, respectively. The positive and negative predictive values of the family physician's assessment in predicting hospitalization were 41.4% and 94.1%, respectively. Troponin kit testing is an important tool to assist the family physician in the assessment of patients with chest pain in the community setting. It may identify otherwise undiagnosed myocardial infarctions and reduce unnecessary referrals to the emergency room.
Gupta, Shikha; Basant, Nikita; Mohan, Dinesh; Singh, Kunwar P
2016-07-01
The persistence and removal of organic chemicals from the atmosphere are largely determined by their reactions with the OH radical and O3. Experimental determination of the kinetic rate constants of OH and O3 with a large number of chemicals is tedious and resource intensive, and the development of computational approaches has been widely advocated. Recently, ensemble machine learning (EML) methods have emerged as unbiased tools to establish relationships between independent and dependent variables having a nonlinear dependence. In this study, EML-based, temperature-dependent quantitative structure-reactivity relationship (QSRR) models have been developed for predicting the kinetic rate constants for OH (kOH) and O3 (kO3) reactions with diverse chemicals. The structural diversity of the chemicals was evaluated using a Tanimoto similarity index. The generalization and prediction abilities of the constructed models were established through rigorous internal and external validation employing statistical checks. On test data, the EML QSRR models yielded a correlation (R²) of ≥0.91 between the measured and predicted reactivities. The applicability domains of the constructed models were determined using methods based on descriptor range, Euclidean distance, leverage, and standardization approaches. Prediction accuracies for the higher-reactivity compounds were relatively better than those for the low-reactivity compounds. The proposed EML QSRR models performed well and outperformed previous reports. They can make predictions of rate constants at different temperatures and can be useful tools for predicting the reactivities of chemicals towards the OH radical and O3 in the atmosphere.
A Unified Model of Performance for Predicting the Effects of Sleep and Caffeine.
Ramakrishnan, Sridhar; Wesensten, Nancy J; Kamimori, Gary H; Moon, James E; Balkin, Thomas J; Reifman, Jaques
2016-10-01
Existing mathematical models of neurobehavioral performance cannot predict the beneficial effects of caffeine across the spectrum of sleep loss conditions, limiting their practical utility. Here, we closed this research gap by integrating a model of caffeine effects with the recently validated unified model of performance (UMP) into a single, unified modeling framework. We then assessed the accuracy of this new UMP in predicting performance across multiple studies. We hypothesized that the pharmacodynamics of caffeine vary similarly during both wakefulness and sleep, and that caffeine has a multiplicative effect on performance. Accordingly, to represent the effects of caffeine in the UMP, we multiplied a dose-dependent caffeine factor (which accounts for the pharmacokinetics and pharmacodynamics of caffeine) to the performance estimated in the absence of caffeine. We assessed the UMP predictions in 14 distinct laboratory- and field-study conditions, including 7 different sleep-loss schedules (from 5 h of sleep per night to continuous sleep loss for 85 h) and 6 different caffeine doses (from placebo to repeated 200 mg doses to a single dose of 600 mg). The UMP accurately predicted group-average psychomotor vigilance task performance data across the different sleep loss and caffeine conditions (6% < error < 27%), yielding greater accuracy for mild and moderate sleep loss conditions than for more severe cases. Overall, accounting for the effects of caffeine resulted in improved predictions (after caffeine consumption) by up to 70%. The UMP provides the first comprehensive tool for accurate selection of combinations of sleep schedules and caffeine countermeasure strategies to optimize neurobehavioral performance. © 2016 Associated Professional Sleep Societies, LLC.
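The multiplicative structure is easy to sketch: estimate a caffeine concentration from a one-compartment pharmacokinetic model and convert it into a factor that scales the caffeine-free performance prediction. All parameter values below are illustrative assumptions, not the published UMP fits:

```python
import numpy as np

def caffeine_factor(t_h, dose_mg, ka=1.0, ke=0.14, b=0.002):
    """Illustrative one-compartment PK driving a multiplicative effect factor.
    A factor below 1 scales down the predicted impairment; ka, ke and b are
    invented values, not the published UMP parameters."""
    conc = dose_mg * ka / (ka - ke) * (np.exp(-ke * t_h) - np.exp(-ka * t_h))
    return 1.0 / (1.0 + b * conc)

def predict_with_caffeine(p_no_caffeine, t_h, dose_mg):
    # The UMP-style combination: the caffeine factor multiplies the
    # performance predicted in the absence of caffeine.
    return p_no_caffeine * caffeine_factor(t_h, dose_mg)

t = np.arange(0.0, 12.0, 0.5)        # hours since a 200 mg dose
baseline = 10 + 0.8 * t              # e.g., PVT lapses rising with time awake
with_caffeine = predict_with_caffeine(baseline, t, dose_mg=200)
```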
Designing and benchmarking the MULTICOM protein structure prediction system
2013-01-01
Background Predicting protein structure from sequence is one of the most significant and challenging problems in bioinformatics. Numerous bioinformatics techniques and tools have been developed to tackle almost every aspect of protein structure prediction ranging from structural feature prediction, template identification and query-template alignment to structure sampling, model quality assessment, and model refinement. How to synergistically select, integrate and improve the strengths of the complementary techniques at each prediction stage and build a high-performance system is becoming a critical issue for constructing a successful, competitive protein structure predictor. Results Over the past several years, we have constructed a standalone protein structure prediction system MULTICOM that combines multiple sources of information and complementary methods at all five stages of the protein structure prediction process including template identification, template combination, model generation, model assessment, and model refinement. The system was blindly tested during the ninth Critical Assessment of Techniques for Protein Structure Prediction (CASP9) in 2010 and yielded very good performance. In addition to studying the overall performance on the CASP9 benchmark, we thoroughly investigated the performance and contributions of each component at each stage of prediction. Conclusions Our comprehensive and comparative study not only provides useful and practical insights about how to select, improve, and integrate complementary methods to build a cutting-edge protein structure prediction system but also identifies a few new sources of information that may help improve the design of a protein structure prediction system. Several components used in the MULTICOM system are available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/. PMID:23442819
Web tools for predictive toxicology model building.
Jeliazkova, Nina
2012-07-01
The development and use of web tools in chemistry has accumulated more than 15 years of history already. Powered by the advances in the Internet technologies, the current generation of web systems are starting to expand into areas, traditional for desktop applications. The web platforms integrate data storage, cheminformatics and data analysis tools. The ease of use and the collaborative potential of the web is compelling, despite the challenges. The topic of this review is a set of recently published web tools that facilitate predictive toxicology model building. The focus is on software platforms, offering web access to chemical structure-based methods, although some of the frameworks could also provide bioinformatics or hybrid data analysis functionalities. A number of historical and current developments are cited. In order to provide comparable assessment, the following characteristics are considered: support for workflows, descriptor calculations, visualization, modeling algorithms, data management and data sharing capabilities, availability of GUI or programmatic access and implementation details. The success of the Web is largely due to its highly decentralized, yet sufficiently interoperable model for information access. The expected future convergence between cheminformatics and bioinformatics databases provides new challenges toward management and analysis of large data sets. The web tools in predictive toxicology will likely continue to evolve toward the right mix of flexibility, performance, scalability, interoperability, sets of unique features offered, friendly user interfaces, programmatic access for advanced users, platform independence, results reproducibility, curation and crowdsourcing utilities, collaborative sharing and secure access.
Validation of RetroPath, a computer-aided design tool for metabolic pathway engineering.
Fehér, Tamás; Planson, Anne-Gaëlle; Carbonell, Pablo; Fernández-Castané, Alfred; Grigoras, Ioana; Dariy, Ekaterina; Perret, Alain; Faulon, Jean-Loup
2014-11-01
Metabolic engineering has succeeded in the biosynthesis of numerous commodity or high-value compounds. However, the choice of pathways and enzymes used for production has often been made ad hoc, or has required expert knowledge of the specific biochemical reactions. In order to rationalize the process of engineering producer strains, we developed the computer-aided design (CAD) tool RetroPath, which explores and enumerates metabolic pathways connecting the endogenous metabolites of a chassis cell to the target compound. To experimentally validate our tool, we constructed 12 top-ranked enzyme combinations producing the flavonoid pinocembrin, four of which displayed significant yields. Namely, our tool queried the enzymes found in metabolic databases based on their annotated and predicted activities. Next, it ranked pathways based on the predicted efficiency of the available enzymes, the toxicity of the intermediate metabolites, and the calculated maximum product flux. To implement the top-ranking pathway, our procedure narrowed down a list of nine million possible enzyme combinations to 12, a number easily assembled and tested. One round of metabolic network optimization based on RetroPath output further increased pinocembrin titers 17-fold. In total, 12 of the 13 enzymes tested in this work displayed a relative performance in accordance with their predicted scores. These results validate the ranking function of our CAD tool and open the way to its utilization in the biosynthesis of novel compounds. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Deep convolutional neural networks for pan-specific peptide-MHC class I binding prediction.
Han, Youngmahn; Kim, Dongsup
2017-12-28
Computational scanning of peptide candidates that bind to a specific major histocompatibility complex (MHC) can speed up the peptide-based vaccine development process, and various methods are therefore being actively developed. Recently, machine-learning-based methods have generated successful results by training on large amounts of experimental data. However, many machine-learning-based methods are generally less sensitive in recognizing locally-clustered interactions, which can synergistically stabilize peptide binding. The deep convolutional neural network (DCNN) is a deep learning method inspired by the visual recognition process of the animal brain, and it is known to capture meaningful local patterns from 2D images. Once peptide-MHC interactions are encoded into image-like array (ILA) data, a DCNN can be employed to build a predictive model for peptide-MHC binding. In this study, we demonstrated that a DCNN is able not only to reliably predict peptide-MHC binding, but also to sensitively detect locally-clustered interactions. Nonapeptide-HLA-A and -B binding data were encoded into ILA data, and a DCNN was trained on them as a pan-specific prediction model. The DCNN showed higher performance than other prediction tools on the latest benchmark datasets, which consist of 43 datasets for 15 HLA-A alleles and 25 datasets for 10 HLA-B alleles. In particular, the DCNN outperformed other tools for alleles belonging to the HLA-A3 supertype: its F1 scores were 0.86, 0.94, and 0.67 for the HLA-A*31:01, HLA-A*03:01, and HLA-A*68:01 alleles, respectively, significantly higher than those of other tools. We found that the DCNN was able to recognize locally-clustered interactions that could synergistically stabilize peptide binding. We developed ConvMHC, a web server providing user-friendly interfaces for peptide-MHC class I binding predictions using the DCNN, accessible via http://jumong.kaist.ac.kr:8080/convmhc . The method's reliable performance in nonapeptide binding prediction was demonstrated through independent evaluation on the latest IEDB benchmark datasets, and our approach can be applied to characterize locally-clustered patterns in other molecular interactions, such as protein/DNA, protein/RNA, and drug/protein interactions.
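The simplest form of such an image-like encoding is a position-by-residue one-hot array, as sketched below. The paper's ILA encoding is richer (it incorporates peptide-MHC interaction information), so this is a simplified stand-in:

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def encode_peptide(peptide):
    """Encode a nonapeptide as a 9x20 one-hot 'image-like array' suitable as
    CNN input (channel axis added up front)."""
    arr = np.zeros((len(peptide), len(AMINO_ACIDS)), dtype=np.float32)
    for pos, aa in enumerate(peptide):
        arr[pos, AA_INDEX[aa]] = 1.0
    return arr[None, :, :]   # shape (1, 9, 20)

x = encode_peptide("SIINFEKLV")
print(x.shape)   # (1, 9, 20)
```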
Towards a generalized energy prediction model for machine tools
Bhinge, Raunak; Park, Jinkyoo; Law, Kincho H.; Dornfeld, David A.; Helu, Moneer; Rachuri, Sudarsan
2017-01-01
Energy prediction of machine tools can deliver many advantages to a manufacturing enterprise, ranging from energy-efficient process planning to machine tool monitoring. Physics-based, energy prediction models have been proposed in the past to understand the energy usage pattern of a machine tool. However, uncertainties in both the machine and the operating environment make it difficult to predict the energy consumption of the target machine reliably. Taking advantage of the opportunity to collect extensive, contextual, energy-consumption data, we discuss a data-driven approach to develop an energy prediction model of a machine tool in this paper. First, we present a methodology that can efficiently and effectively collect and process data extracted from a machine tool and its sensors. We then present a data-driven model that can be used to predict the energy consumption of the machine tool for machining a generic part. Specifically, we use Gaussian Process (GP) Regression, a non-parametric machine-learning technique, to develop the prediction model. The energy prediction model is then generalized over multiple process parameters and operations. Finally, we apply this generalized model with a method to assess uncertainty intervals to predict the energy consumed to machine any part using a Mori Seiki NVD1500 machine tool. Furthermore, the same model can be used during process planning to optimize the energy-efficiency of a machining process. PMID:28652687
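A minimal version of GP-regression-based energy prediction with uncertainty intervals can be put together with standard tooling; the process features and data below are invented for illustration, and the kernel choice is an assumption:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(7)
# Hypothetical process features: spindle speed, feed rate, depth of cut.
X = rng.uniform([1000, 50, 0.5], [8000, 400, 3.0], size=(80, 3))
energy = 0.002 * X[:, 0] + 0.01 * X[:, 1] * X[:, 2] + rng.normal(0, 0.5, 80)

kernel = RBF(length_scale=[1000.0, 100.0, 1.0]) + WhiteKernel(noise_level=0.25)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, energy)

# Mean prediction plus an uncertainty interval, as the paper emphasizes.
X_new = np.array([[4000, 200, 1.5]])
mean, std = gp.predict(X_new, return_std=True)
print(f"predicted energy: {mean[0]:.2f} ± {1.96 * std[0]:.2f}")
```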
Contamination Effects on EUV Optics
NASA Technical Reports Server (NTRS)
Tveekrem, J.
1999-01-01
During ground-based assembly and upon exposure to the space environment, optical surfaces accumulate both particles and molecular condensables, inevitably degrading optical instrument performance. Currently, this performance degradation (and the resulting end-of-life instrument performance) cannot be predicted with sufficient accuracy using existing software tools. Optical design codes exist to calculate instrument performance, but they generally assume uncontaminated optical surfaces. Contamination models exist that predict approximate end-of-life contamination levels, but the optical effects of these contamination levels cannot be quantified without detailed information about the optical constants and scattering properties of the contaminant. The problem is particularly pronounced in the extreme ultraviolet (EUV, 300-1,200 Å) and far ultraviolet (FUV, 1,200-2,000 Å) regimes due to a lack of data and of knowledge of the detailed physical and chemical processes involved. Yet it is in precisely these wavelength regimes that accurate predictions matter most, because EUV/FUV instruments are extremely sensitive to contamination.
Wave Rotor Research and Technology Development
NASA Technical Reports Server (NTRS)
Welch, Gerard E.
1998-01-01
Wave rotor technology offers the potential to increase the performance of gas turbine engines significantly, within the constraints imposed by current material temperature limits. The wave rotor research at the NASA Lewis Research Center is a three-element effort: 1) Development of design and analysis tools to accurately predict the performance of wave rotor components; 2) Experiments to characterize component performance; 3) System integration studies to evaluate the effect of wave rotor topping on the gas turbine engine system.
Hériché, Jean-Karim; Lees, Jon G.; Morilla, Ian; Walter, Thomas; Petrova, Boryana; Roberti, M. Julia; Hossain, M. Julius; Adler, Priit; Fernández, José M.; Krallinger, Martin; Haering, Christian H.; Vilo, Jaak; Valencia, Alfonso; Ranea, Juan A.; Orengo, Christine; Ellenberg, Jan
2014-01-01
The advent of genome-wide RNA interference (RNAi)–based screens puts us in the position to identify genes for all functions human cells carry out. However, for many functions, assay complexity and cost make genome-scale knockdown experiments impossible. Methods to predict genes required for cell functions are therefore needed to focus RNAi screens from the whole genome on the most likely candidates. Although different bioinformatics tools for gene function prediction exist, they lack experimental validation and are therefore rarely used by experimentalists. To address this, we developed an effective computational gene selection strategy that represents public data about genes as graphs and then analyzes these graphs using kernels on graph nodes to predict functional relationships. To demonstrate its performance, we predicted human genes required for a poorly understood cellular function—mitotic chromosome condensation—and experimentally validated the top 100 candidates with a focused RNAi screen by automated microscopy. Quantitative analysis of the images demonstrated that the candidates were indeed strongly enriched in condensation genes, including the discovery of several new factors. By combining bioinformatics prediction with experimental validation, our study shows that kernels on graph nodes are powerful tools to integrate public biological data and predict genes involved in cellular functions of interest. PMID:24943848
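The kernel-on-graph-nodes idea can be illustrated compactly: build a diffusion kernel from a network Laplacian and rank candidates by kernel similarity to known positives. The toy network below stands in for the integrated public-data graphs used in the study:

```python
import numpy as np
from scipy.linalg import expm

def diffusion_kernel(adjacency, beta=0.5):
    """Kernel on graph nodes: K = exp(-beta * L), with L the graph Laplacian."""
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    return expm(-beta * L)

def rank_candidates(adjacency, seed_idx):
    """Score every node by its mean kernel similarity to known (seed) genes."""
    K = diffusion_kernel(adjacency)
    scores = K[:, seed_idx].mean(axis=1)
    return np.argsort(scores)[::-1]   # best candidates first

# Toy 6-gene network; genes 0 and 1 play the role of known positives.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 1, 0, 0],
              [1, 1, 0, 0, 0, 0],
              [0, 1, 0, 0, 1, 0],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 0, 1, 0]], dtype=float)
print(rank_candidates(A, seed_idx=[0, 1]))
```

In practice the seed genes themselves would be excluded from the ranked output before selecting candidates for screening.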
NASA Technical Reports Server (NTRS)
Hribar, Michelle R.; Frumkin, Michael; Jin, Haoqiang; Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)
1998-01-01
Over the past decade, high performance computing has evolved rapidly; systems based on commodity microprocessors have been introduced in quick succession from at least seven vendors/families. Porting codes to every new architecture is a difficult problem; in particular, here at NASA, there are many large CFD applications that are very costly to port to new machines by hand. The LCM ("Legacy Code Modernization") Project is the development of an integrated parallelization environment (IPE) which performs the automated mapping of legacy CFD (Fortran) applications to state-of-the-art high performance computers. While most porting projects focus on the parallelization of the code, we consider porting to be an iterative process consisting of several steps: 1) code cleanup, 2) serial optimization, 3) parallelization, 4) performance monitoring and visualization, 5) intelligent tools for automated tuning using performance prediction, and 6) machine-specific optimization. The approach for building this parallelization environment is to build the components for each of the steps simultaneously and then integrate them together. The demonstration will exhibit our latest research in building this environment: 1. parallelizing tools and compiler evaluation; 2. code cleanup and serial optimization using automated scripts; 3. development of a code generator for performance prediction; 4. automated partitioning; 5. automated insertion of directives. These demonstrations will exhibit the effectiveness of an automated approach for all the steps involved in porting and tuning a legacy code application for a new architecture.
Performance of PRISM III and PELOD-2 scores in a pediatric intensive care unit.
Gonçalves, Jean-Pierre; Severo, Milton; Rocha, Carla; Jardim, Joana; Mota, Teresa; Ribeiro, Augusto
2015-10-01
The study aims were to compare two models, the Pediatric Risk of Mortality III (PRISM III) and Pediatric Logistic Organ Dysfunction 2 (PELOD-2), for prediction of mortality in a pediatric intensive care unit (PICU), and to recalibrate PELOD-2 in a Portuguese population. To this end, a prospective cohort study evaluating score performance (standardized mortality ratio, discrimination, and calibration) for both models was performed. A total of 556 patients consecutively admitted to our PICU between January 2011 and December 2012 were included in the analysis. The median age was 65 months, with an interquartile range of 1 month to 17 years. The male-to-female ratio was 1.5. The median length of PICU stay was 3 days. The overall predicted number of deaths was 30.8 patients using PRISM III and 22.1 patients using PELOD-2, whereas the observed mortality was 29 patients. The areas under the receiver operating characteristic curve for the two models were 0.92 and 0.94, respectively. The Hosmer-Lemeshow goodness-of-fit test showed good calibration only for PRISM III (PRISM III: χ² = 3.820, p = 0.282; PELOD-2: χ² = 9.576, p = 0.022). Both scores had good discrimination; PELOD-2 needs recalibration to be a more reliable prediction tool. What is Known: • PRISM III (Pediatric Risk of Mortality III) and PELOD (Pediatric Logistic Organ Dysfunction) scores are frequently used to assess the performance of intensive care units and for mortality prediction in the pediatric population. • PELOD-2 is the newer version of PELOD and has recently been validated with good discrimination and calibration. What is New: • In our population, both scores had good discrimination. • PELOD-2 needs recalibration to be a more reliable prediction tool.
Downar, James; Goldman, Russell; Pinto, Ruxandra; Englesakis, Marina; Adhikari, Neill K J
2017-04-03
The surprise question - "Would I be surprised if this patient died in the next 12 months?" - has been used to identify patients at high risk of death who might benefit from palliative care services. Our objective was to systematically review the performance characteristics of the surprise question in predicting death. We searched multiple electronic databases from inception to 2016 to identify studies that prospectively screened patients with the surprise question and reported on death at 6 to 18 months. We constructed models of hierarchical summary receiver operating characteristics (sROCs) to determine prognostic performance. Sixteen studies (17 cohorts, 11 621 patients) met the selection criteria. For the outcome of death at 6 to 18 months, the pooled prognostic characteristics were sensitivity 67.0% (95% confidence interval [CI] 55.7%-76.7%), specificity 80.2% (73.3%-85.6%), positive likelihood ratio 3.4 (95% CI 2.8-4.1), negative likelihood ratio 0.41 (95% CI 0.32-0.54), positive predictive value 37.1% (95% CI 30.2%-44.6%) and negative predictive value 93.1% (95% CI 91.0%-94.8%). The surprise question had worse discrimination in patients with noncancer illness (area under sROC curve 0.77 [95% CI 0.73-0.81]) than in patients with cancer (area under sROC curve 0.83 [95% CI 0.79-0.87; p = 0.02 for difference]). Most studies had a moderate to high risk of bias, often because they had a low or unknown participation rate or had missing data. The surprise question performs poorly to modestly as a predictive tool for death, with worse performance in noncancer illness. Further studies are needed to develop accurate tools to identify patients with palliative care needs and to assess the surprise question for this purpose. © 2017 Canadian Medical Association or its licensors.
An unsupervised classification scheme for improving predictions of prokaryotic TIS.
Tech, Maike; Meinicke, Peter
2006-03-09
Although it is not difficult for state-of-the-art gene finders to identify coding regions in prokaryotic genomes, exact prediction of the corresponding translation initiation sites (TIS) is still a challenging problem. Recently a number of post-processing tools have been proposed for improving the annotation of prokaryotic TIS. However, inherent difficulties of these approaches arise from the considerable variation of TIS characteristics across different species. Therefore prior assumptions about the properties of prokaryotic gene starts may cause suboptimal predictions for newly sequenced genomes with TIS signals differing from those of well-investigated genomes. We introduce a clustering algorithm for completely unsupervised scoring of potential TIS, based on positionally smoothed probability matrices. The algorithm requires an initial gene prediction and the genomic sequence of the organism to perform the reannotation. As compared with other methods for improving predictions of gene starts in bacterial genomes, our approach is not based on any specific assumptions about prokaryotic TIS. Despite the generality of the underlying algorithm, the prediction rate of our method is competitive on experimentally verified test data from E. coli and B. subtilis. Regarding genomes with high G+C content, in contrast to some previously proposed methods, our algorithm also provides good performance on P. aeruginosa, B. pseudomallei and R. solanacearum. On reliable test data we showed that our method provides good results in post-processing the predictions of the widely-used program GLIMMER. The underlying clustering algorithm is robust with respect to variations in the initial TIS annotation and does not require specific assumptions about prokaryotic gene starts. These features are particularly useful on genomes with high G+C content. The algorithm has been implemented in the tool "TICO" (TIs COrrector) which is publicly available from our web site.
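Although TICO's algorithm is iterative and fully unsupervised, its central object, a positional probability matrix scored against candidate start contexts, can be sketched simply. The smoothing below is a crude stand-in for the positional smoothing described in the paper, and the example sequences are invented:

```python
import numpy as np

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def build_pwm(sequences, pseudo=1.0, smooth_window=3):
    """Positional probability matrix from aligned TIS-flanking sequences,
    with pseudocounts and a moving-average smoothing across positions."""
    L = len(sequences[0])
    counts = np.full((L, 4), pseudo)
    for seq in sequences:
        for i, b in enumerate(seq):
            counts[i, BASES[b]] += 1
    probs = counts / counts.sum(axis=1, keepdims=True)
    kernel = np.ones(smooth_window) / smooth_window
    smoothed = np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, probs)
    return smoothed / smoothed.sum(axis=1, keepdims=True)

def score(seq, pwm, background=0.25):
    # Log-odds score of a candidate start context against the matrix.
    return sum(np.log(pwm[i, BASES[b]] / background) for i, b in enumerate(seq))

pwm = build_pwm(["AGGAGGATG", "AGGTGGATG", "AAGAGGATG"])
print(score("AGGAGGATG", pwm))
```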
RepeatsDB-lite: a web server for unit annotation of tandem repeat proteins.
Hirsh, Layla; Paladin, Lisanna; Piovesan, Damiano; Tosatto, Silvio C E
2018-05-09
RepeatsDB-lite (http://protein.bio.unipd.it/repeatsdb-lite) is a web server for the prediction of repetitive structural elements and units in tandem repeat (TR) proteins. TRs are a widespread but poorly annotated class of non-globular proteins carrying heterogeneous functions. RepeatsDB-lite extends the prediction to all TR types and strongly improves the performance both in terms of computational time and accuracy over previous methods, with precision above 95% for solenoid structures. The algorithm exploits an improved TR unit library derived from the RepeatsDB database to perform an iterative structural search and assignment. The web interface provides tools for analyzing the evolutionary relationships between units and manually refine the prediction by changing unit positions and protein classification. An all-against-all structure-based sequence similarity matrix is calculated and visualized in real-time for every user edit. Reviewed predictions can be submitted to RepeatsDB for review and inclusion.
NASA Technical Reports Server (NTRS)
Bienert, Nancy; Mercer, Joey; Homola, Jeffrey; Morey, Susan; Prevot, Thomas
2014-01-01
This paper presents a case study of how factors such as wind prediction errors and metering delays can influence controller performance and workload in Human-In-The-Loop simulations. Retired air traffic controllers worked two arrival sectors adjacent to the terminal area. The main tasks were to provide safe air traffic operations and deliver the aircraft to the metering fix within +/- 25 seconds of the scheduled arrival time with the help of provided decision support tools. Analyses explore the potential impact of metering delays and system uncertainties on controller workload and performance. The results suggest that trajectory prediction uncertainties impact safety performance, while metering fix accuracy and workload appear subject to the scenario difficulty.
Failure mode analysis to predict product reliability.
NASA Technical Reports Server (NTRS)
Zemanick, P. P.
1972-01-01
The failure mode analysis (FMA) is described as a design tool to predict and improve product reliability. The objectives of the failure mode analysis are presented as they influence component design, configuration selection, the product test program, the quality assurance plan, and engineering analysis priorities. The detailed mechanics of performing a failure mode analysis are discussed, including one suggested format. Some practical difficulties of implementation are indicated, drawn from experience with preparing FMAs on the nuclear rocket engine program.
2015-10-30
predictors of ACL injury.25 Several studies investigate the effects of faulty movement and injury prediction for the lower extremity. In 2006... at 40% and 39% of the total injuries, respectively.16 In 2012, 83 NCAA Division I football players participated in a survey to assess low back... In a recent study, firefighters performed the FMS™ and firefighter-specific testing. Two of the musculoskeletal movement variables were predictive of
New Tool Released for Engine-Airframe Blade-Out Structural Simulations
NASA Technical Reports Server (NTRS)
Lawrence, Charles
2004-01-01
Researchers at the NASA Glenn Research Center have enhanced a general-purpose finite element code, NASTRAN, for engine-airframe structural simulations during steady-state and transient operating conditions. For steady-state simulations, the code can predict critical operating speeds, natural modes of vibration, and forced response (e.g., cabin noise and component fatigue). The code can be used to perform static analysis to predict engine-airframe response and component stresses due to maneuver loads. For transient response, the simulation code can be used to predict response due to blade-out events and the subsequent engine shutdown and windmilling conditions. In addition, the code can be used as a pretest analysis tool to predict the results of the blade-out test required for FAA certification of new and derivative aircraft engines. Before the present analysis code was developed, all the major aircraft engine and airframe manufacturers in the United States and overseas were performing similar types of analyses to ensure the structural integrity of engine-airframe systems. Although there were many similarities among the analysis procedures, each manufacturer was developing and maintaining its own structural analysis capabilities independently. This situation led to high software development and maintenance costs, complications when manufacturers exchanged models and results, and limitations in predicting the structural response to the desired degree of accuracy. An industry-NASA team was formed to overcome these problems by developing a common analysis tool that would satisfy all the structural analysis needs of the industry and that would be available and supported by a commercial software vendor, relieving the team members of maintenance and development responsibilities. Input from all the team members was used to ensure that everyone's requirements were satisfied and that the best technology was incorporated into the code. Furthermore, because the code would be distributed by a commercial software vendor, it would be more readily available to engine and airframe manufacturers, as well as to nonaircraft companies that did not previously have access to this capability.
Bendl, Jaroslav; Musil, Miloš; Štourač, Jan; Zendulka, Jaroslav; Damborský, Jiří; Brezovský, Jan
2016-05-01
An important message taken from human genome sequencing projects is that the human population exhibits approximately 99.9% genetic similarity. Variations in the remaining parts of the genome determine our identity, trace our history and reveal our heritage. The precise delineation of phenotypically causal variants plays a key role in providing accurate personalized diagnosis, prognosis, and treatment of inherited diseases. Several computational methods for achieving such delineation have been reported recently. However, their ability to pinpoint potentially deleterious variants is limited by the fact that their mechanisms of prediction do not account for the existence of different categories of variants. Consequently, their output is biased towards the variant categories that are most strongly represented in the variant databases. Moreover, most such methods provide numeric scores but not binary predictions of the deleteriousness of variants or confidence scores that would be more easily understood by users. We have constructed three datasets covering different types of disease-related variants, which were divided across five categories: (i) regulatory, (ii) splicing, (iii) missense, (iv) synonymous, and (v) nonsense variants. These datasets were used to develop category-optimal decision thresholds and to evaluate six tools for variant prioritization: CADD, DANN, FATHMM, FitCons, FunSeq2 and GWAVA. This evaluation revealed some important advantages of the category-based approach. The results obtained with the five best-performing tools were then combined into a consensus score. Additional comparative analyses showed that in the case of missense variations, protein-based predictors perform better than DNA sequence-based predictors. A user-friendly web interface was developed that provides easy access to the five tools' predictions, and their consensus scores, in a user-understandable format tailored to the specific features of different categories of variations. To enable comprehensive evaluation of variants, the predictions are complemented with annotations from eight databases. The web server is freely available to the community at http://loschmidt.chemi.muni.cz/predictsnp2.
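The consensus construction lends itself to a small sketch: convert each tool's raw score into a vote using its category-optimized threshold, then average the votes, optionally weighting by measured tool performance. All scores, thresholds, and weights below are invented for illustration:

```python
import numpy as np

def consensus(scores, thresholds, weights=None):
    """Combine per-tool scores into a binary call plus a confidence in [0, 1].
    Each tool votes 'deleterious' if its score meets its own threshold."""
    scores = np.asarray(scores, dtype=float)
    thresholds = np.asarray(thresholds, dtype=float)
    votes = (scores >= thresholds).astype(float)
    w = np.ones_like(votes) if weights is None else np.asarray(weights, float)
    conf = (votes * w).sum() / w.sum()
    return conf >= 0.5, conf

# Five hypothetical tool scores for one missense variant.
call, conf = consensus(scores=[12.1, 0.93, -1.5, 0.61, 0.44],
                       thresholds=[15.0, 0.90, -1.0, 0.50, 0.40])
print(call, conf)  # True 0.6
```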
Aeromechanics and Aeroacoustics Predictions of the Boeing-SMART Rotor Using Coupled-CFD/CSD Analyses
NASA Technical Reports Server (NTRS)
Bain, Jeremy; Sim, Ben W.; Sankar, Lakshmi; Brentner, Ken
2010-01-01
This paper will highlight helicopter aeromechanics and aeroacoustics prediction capabilities developed by Georgia Institute of Technology, the Pennsylvania State University, and Northern Arizona University under the Helicopter Quieting Program (HQP) sponsored by the Tactical Technology Office of the Defense Advanced Research Projects Agency (DARPA). First initiated in 2004, the goal of the HQP was to develop high fidelity, state-of-the-art computational tools for designing advanced helicopter rotors with reduced acoustic perceptibility and enhanced performance. A critical step towards achieving this objective is the development of rotorcraft prediction codes capable of assessing a wide range of helicopter configurations and operations for future rotorcraft designs. This includes novel next-generation rotor systems that incorporate innovative passive and/or active elements to meet future challenging military performance and survivability goals.
WFIRST: Data/Instrument Simulation Support at IPAC
NASA Astrophysics Data System (ADS)
Laine, Seppo; Akeson, Rachel; Armus, Lee; Bennett, Lee; Colbert, James; Helou, George; Kirkpatrick, J. Davy; Meshkat, Tiffany; Paladini, Roberta; Ramirez, Solange; Wang, Yun; Xie, Joan; Yan, Lin
2018-01-01
As part of WFIRST Science Center preparations, the IPAC Science Operations Center (ISOC) maintains a repository of 1) WFIRST data and instrument simulations, 2) tools to facilitate scientific performance and feasibility studies with WFIRST, and 3) parameters summarizing the current design and predicted performance of the WFIRST telescope and instruments. The simulation repository provides access for the science community to simulation code, tools, and resulting analyses. Examples of simulation code with ISOC-built web-based interfaces include EXOSIMS (for estimating exoplanet yields in CGI surveys) and the Galaxy Survey Exposure Time Calculator. In the future, the repository will provide an interface for users to run custom simulations of a wide range of coronagraph instrument (CGI) observations, as well as sophisticated tools for designing microlensing experiments. We encourage those who are generating simulations or writing tools for exoplanet observations with WFIRST to contact the ISOC team so we can work with you to bring these to the attention of the broader astronomical community as we prepare for the exciting science that will be enabled by WFIRST.
Nose-to-tail analysis of an airbreathing hypersonic vehicle using an in-house simplified tool
NASA Astrophysics Data System (ADS)
Piscitelli, Filomena; Cutrone, Luigi; Pezzella, Giuseppe; Roncioni, Pietro; Marini, Marco
2017-07-01
SPREAD (Scramjet PREliminary Aerothermodynamic Design) is a simplified, in-house method developed by CIRA (Italian Aerospace Research Centre) that provides a preliminary estimation of engine/aeroshape performance for airbreathing configurations. It is especially useful for scramjet engines, for which the strong coupling between the aerothermodynamic (external) and propulsive (internal) flow fields requires real-time screening of several engine/aeroshape configurations and the identification of the most promising one(s) with respect to user-defined constraints and requirements. The outcome of this tool defines the baseline configuration for further design analyses with more accurate tools, e.g., CFD simulations and wind tunnel testing. The SPREAD tool has been used to perform the nose-to-tail analysis of the LAPCAT-II Mach 8 MR2.4 vehicle configuration. The numerical results demonstrate SPREAD's capability to quickly predict reliable values of aero-propulsive balance (i.e., net thrust) and aerodynamic efficiency in a pre-design phase.
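The bookkeeping behind an aero-propulsive balance can be shown with a minimal sketch (all quantities and values below are hypothetical, not taken from SPREAD): net thrust is what remains of installed gross thrust after ram drag and external drag are subtracted, and aerodynamic efficiency is the lift-to-drag ratio.

```python
def aero_propulsive_balance(gross_thrust, ram_drag, external_drag, lift):
    """Nose-to-tail bookkeeping (illustrative): net thrust is installed
    gross thrust minus ram drag and external aerodynamic drag; aerodynamic
    efficiency is the lift-to-drag ratio."""
    net_thrust = gross_thrust - ram_drag - external_drag
    efficiency = lift / external_drag
    return net_thrust, efficiency

# Hypothetical cruise-point values in kilonewtons.
print(aero_propulsive_balance(gross_thrust=480.0, ram_drag=310.0,
                              external_drag=150.0, lift=600.0))
```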
Hsu, Kuo-Hsiang; Su, Bo-Han; Tu, Yi-Shu; Lin, Olivia A.; Tseng, Yufeng J.
2016-01-01
With advances in the development and application of Ames mutagenicity in silico prediction tools, the International Conference on Harmonisation (ICH) has amended its M7 guideline to reflect the use of such prediction models for the detection of mutagenic activity in early drug safety evaluation processes. Since current Ames mutagenicity prediction tools only focus on functional group alerts or side chain modifications of an analog series, these tools are unable to identify mutagenicity derived from core structures or specific scaffolds of a compound. In this study, a large collection of 6512 compounds is used to perform scaffold tree analysis. By relating different scaffolds on constructed scaffold trees with Ames mutagenicity, four major and one minor novel mutagenic scaffold groups are identified. The recognized mutagenic scaffold groups can serve as a guide for medicinal chemists to prevent the development of potentially mutagenic therapeutic agents in early drug design or development phases, by modifying the core structures of mutagenic compounds to form non-mutagenic compounds. In addition, five series of substructures are provided as recommendations for direct modification of potentially mutagenic scaffolds to decrease the associated mutagenic activities. PMID:26863515
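The core of a scaffold-level analysis can be sketched with RDKit's Bemis-Murcko scaffold extraction: group compounds by scaffold and flag scaffolds whose members are predominantly Ames-positive. The thresholds and the `records` input are illustrative; the paper's scaffold-tree construction and statistics are not reproduced here.

```python
from collections import defaultdict
from rdkit import Chem
from rdkit.Chem.Scaffolds import MurckoScaffold

def mutagenic_scaffolds(records, min_count=5, min_fraction=0.8):
    """Group compounds by Bemis-Murcko scaffold and flag scaffolds whose
    members are predominantly Ames-positive. `records` is an iterable of
    (smiles, is_mutagenic) pairs; both thresholds are illustrative."""
    by_scaffold = defaultdict(list)
    for smiles, is_mutagenic in records:
        mol = Chem.MolFromSmiles(smiles)
        if mol is None:
            continue  # skip unparseable structures
        scaffold = MurckoScaffold.MurckoScaffoldSmiles(mol=mol)
        by_scaffold[scaffold].append(is_mutagenic)
    flagged = {}
    for scaffold, labels in by_scaffold.items():
        fraction = sum(labels) / len(labels)
        if len(labels) >= min_count and fraction >= min_fraction:
            flagged[scaffold] = (len(labels), fraction)
    return flagged
```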
Artificial neural networks as a useful tool to predict the risk level of Betula pollen in the air
NASA Astrophysics Data System (ADS)
Castellano-Méndez, M.; Aira, M. J.; Iglesias, I.; Jato, V.; González-Manteiga, W.
2005-05-01
An increasing percentage of the European population suffers from allergies to pollen. The study of the evolution of air pollen concentration supplies prior knowledge of the levels of pollen in the air, which can be useful for the prevention and treatment of allergic symptoms, and the management of medical resources. The symptoms of Betula pollinosis can be associated with certain levels of pollen in the air. The aim of this study was to predict the risk of the concentration of pollen exceeding a given level, using previous pollen and meteorological information, by applying neural network techniques. Neural networks are a widespread statistical tool useful for the study of problems associated with complex or poorly understood phenomena. The binary response variable associated with each level requires a careful selection of the neural network and the error function associated with the learning algorithm used during the training phase. The performance of the neural network with the validation set showed that the risk of the pollen level exceeding a certain threshold can be successfully forecasted using artificial neural networks. This prediction tool may be implemented to create an automatic system that forecasts the risk of suffering allergic symptoms.
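A minimal stand-in for this kind of exceedance forecast, assuming synthetic data in place of the Betula pollen and meteorological series: a small feed-forward network trained on cross-entropy (log-loss), the error function matched to the binary response the abstract highlights.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical predictors: previous-day pollen, temperature, rainfall.
X = rng.normal(size=(500, 3))
# Hypothetical binary response: did pollen exceed the risk threshold?
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500)) > 0

X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# MLPClassifier minimises cross-entropy (log-loss), the error function
# appropriate for a binary exceedance response.
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print("validation accuracy:", clf.score(X_val, y_val))
```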
Realistic wave-optics simulation of X-ray phase-contrast imaging at a human scale
Sung, Yongjin; Segars, W. Paul; Pan, Adam; Ando, Masami; Sheppard, Colin J. R.; Gupta, Rajiv
2015-01-01
X-ray phase-contrast imaging (XPCI) can dramatically improve soft tissue contrast in X-ray medical imaging. Despite worldwide efforts to develop novel XPCI systems, a numerical framework to rigorously predict the performance of a clinical XPCI system at a human scale is not yet available. We have developed such a tool by combining a numerical anthropomorphic phantom defined with non-uniform rational B-splines (NURBS) and a wave optics-based simulator that can accurately capture the phase-contrast signal from a human-scaled numerical phantom. Using a synchrotron-based, high-performance XPCI system, we provide qualitative comparison between simulated and experimental images. Our tool can be used to simulate the performance of XPCI on various disease entities and compare proposed XPCI systems in an unbiased manner. PMID:26169570
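A toy wave-optics calculation conveys the mechanism at a much smaller scale than the authors' NURBS-phantom framework: a weak phase object is imprinted on a unit-amplitude X-ray field and propagated with the Fresnel (angular-spectrum) transfer function, producing the edge enhancement characteristic of propagation-based XPCI. All parameters below are illustrative.

```python
import numpy as np

# Illustrative beam and grid parameters: ~20 keV X-rays, 1 um pixels.
wavelength = 6.2e-11   # metres
pixel = 1e-6           # metres
n = 512

# Simple phase object: a Gaussian phase bump with weak absorption.
x = (np.arange(n) - n // 2) * pixel
X, Y = np.meshgrid(x, x)
r2 = X**2 + Y**2
phase = -0.5 * np.exp(-r2 / (50 * pixel) ** 2)       # radians
absorption = 0.01 * np.exp(-r2 / (50 * pixel) ** 2)
field = np.exp(1j * phase - absorption)

# Fresnel propagation by the angular-spectrum transfer function.
z = 1.0                # propagation distance in metres
fx = np.fft.fftfreq(n, d=pixel)
FX, FY = np.meshgrid(fx, fx)
H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
intensity = np.abs(np.fft.ifft2(np.fft.fft2(field) * H)) ** 2
print("edge enhancement (max/min intensity):", intensity.max() / intensity.min())
```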
MRIQC: Advancing the automatic prediction of image quality in MRI from unseen sites
2017-01-01
Quality control of MRI is essential for excluding problematic acquisitions and avoiding bias in subsequent image processing and analysis. Visual inspection is subjective and impractical for large scale datasets. Although automated quality assessments have been demonstrated on single-site datasets, it is unclear that solutions can generalize to unseen data acquired at new sites. Here, we introduce the MRI Quality Control tool (MRIQC), a tool for extracting quality measures and fitting a binary (accept/exclude) classifier. Our tool can be run both locally and as a free online service via the OpenNeuro.org portal. The classifier is trained on a publicly available, multi-site dataset (17 sites, N = 1102). We perform model selection evaluating different normalization and feature exclusion approaches aimed at maximizing across-site generalization and estimate an accuracy of 76%±13% on new sites, using leave-one-site-out cross-validation. We confirm that result on a held-out dataset (2 sites, N = 265), also obtaining 76% accuracy. Even though the performance of the trained classifier is statistically above chance, we show that it is susceptible to site effects and unable to account for artifacts specific to new sites. MRIQC performs with high accuracy in intra-site prediction, but performance on unseen sites leaves room for improvement, which might require more labeled data and new approaches to modeling between-site variability. Overcoming these limitations is crucial for a more objective quality assessment of neuroimaging data, and to enable the analysis of extremely large and multi-site samples. PMID:28945803
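The across-site evaluation protocol can be outlined with scikit-learn's LeaveOneGroupOut, here on random stand-ins for the image quality metrics; MRIQC's actual features and classifier settings are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))          # stand-ins for image quality metrics
y = rng.integers(0, 2, size=300)        # accept/exclude labels
sites = rng.integers(0, 17, size=300)   # acquisition site per scan

# Leave-one-site-out: every fold holds out all scans from one site,
# so the score reflects generalisation to an unseen site.
logo = LeaveOneGroupOut()
scores = cross_val_score(RandomForestClassifier(random_state=0),
                         X, y, groups=sites, cv=logo)
print(f"accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```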
Expression signature as a biomarker for prenatal diagnosis of trisomy 21.
Volk, Marija; Maver, Aleš; Lovrečić, Luca; Juvan, Peter; Peterlin, Borut
2013-01-01
A universal biomarker panel with the potential to predict high-risk pregnancies or adverse pregnancy outcome does not exist. Transcriptome analysis is a powerful tool to capture differentially expressed genes (DEG), which can be used as a biomarker-diagnostic-predictive tool for various conditions in the prenatal setting. In search of a biomarker set for predicting high-risk pregnancies, we performed global expression profiling to find DEG in Ts21. Subsequently, we performed targeted validation and diagnostic performance evaluation on a larger group of case and control samples. Initially, transcriptomic profiles of 10 cultivated amniocyte samples with Ts21 and 9 with normal euploid constitution were determined using expression microarrays. Datasets from Ts21 transcriptomic studies from the GEO repository were incorporated. DEG were discovered using linear regression modelling and validated using RT-PCR quantification on an independent sample of 16 cases with Ts21 and 32 controls. The classification performance of Ts21 status based on expression profiling was assessed using a supervised machine learning algorithm and evaluated using a leave-one-out cross validation approach. Global gene expression profiling revealed significant expression changes between normal and Ts21 samples, which, in combination with data from previously performed Ts21 transcriptomic studies, were used to generate a multi-gene biomarker for Ts21 comprising 9 gene expression profiles. In addition to the biomarker's high performance in discriminating samples from global expression profiling, we were also able to show its discriminatory performance on the larger sample set 2, validated using an RT-PCR experiment (AUC=0.97), while its performance on data from previously published studies reached discriminatory AUC values of 1.00. Our results show that transcriptomic changes might potentially be used to discriminate trisomy of chromosome 21 in the prenatal setting. As expressional alterations reflect both causal and reactive cellular mechanisms, transcriptomic changes may thus have future potential in the diagnosis of a wide array of heterogeneous diseases that result from genetic disturbances.
Common features of microRNA target prediction tools
Peterson, Sarah M.; Thompson, Jeffrey A.; Ufkin, Melanie L.; Sathyanarayana, Pradeep; Liaw, Lucy; Congdon, Clare Bates
2014-01-01
The human genome encodes for over 1800 microRNAs (miRNAs), which are short non-coding RNA molecules that function to regulate gene expression post-transcriptionally. Due to the potential for one miRNA to target multiple gene transcripts, miRNAs are recognized as a major mechanism to regulate gene expression and mRNA translation. Computational prediction of miRNA targets is a critical initial step in identifying miRNA:mRNA target interactions for experimental validation. The available tools for miRNA target prediction encompass a range of different computational approaches, from the modeling of physical interactions to the incorporation of machine learning. This review provides an overview of the major computational approaches to miRNA target prediction. Our discussion highlights three tools for their ease of use, reliance on relatively updated versions of miRBase, and range of capabilities, and these are DIANA-microT-CDS, miRanda-mirSVR, and TargetScan. In comparison across all miRNA target prediction tools, four main aspects of the miRNA:mRNA target interaction emerge as common features on which most target prediction is based: seed match, conservation, free energy, and site accessibility. This review explains these features and identifies how they are incorporated into currently available target prediction tools. MiRNA target prediction is a dynamic field with increasing attention on development of new analysis tools. This review attempts to provide a comprehensive assessment of these tools in a manner that is accessible across disciplines. Understanding the basis of these prediction methodologies will aid in user selection of the appropriate tools and interpretation of the tool output. PMID:24600468
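Of the four common features, the seed match is the simplest to illustrate: a canonical match is a perfect Watson-Crick pairing between the miRNA seed (nucleotides 2-8) and a site in the 3'UTR. A minimal sketch follows (the UTR fragment is made up for the example; conservation, free energy, and accessibility are not modeled).

```python
COMPLEMENT = str.maketrans("AUCG", "UAGC")

def seed_sites(mirna, utr):
    """Return positions in a 3'UTR (RNA alphabet) matching the reverse
    complement of the miRNA seed, nucleotides 2-8 (7mer-m8 style match)."""
    seed = mirna[1:8]                              # positions 2-8
    target = seed.translate(COMPLEMENT)[::-1]      # reverse complement
    return [i for i in range(len(utr) - len(target) + 1)
            if utr[i:i + len(target)] == target]

# hsa-let-7a-5p seed region scanned against a made-up UTR fragment.
print(seed_sites("UGAGGUAGUAGGUUGUAUAGUU", "AAGCUACCUCAGGGAA"))
```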
Analytical model for force prediction when machining metal matrix composites
NASA Astrophysics Data System (ADS)
Sikder, Snahungshu
Metal Matrix Composites (MMC) offer several thermo-mechanical advantages over standard materials and alloys which make them better candidates in different applications. Their light weight, high stiffness, and strength have attracted several industries such as automotive, aerospace, and defence for their wide range of products. However, the widespread application of Metal Matrix Composites is still a challenge for industry. The hard and abrasive nature of the reinforcement particles is responsible for rapid tool wear and high machining costs. Fracture and debonding of the abrasive reinforcement particles are the main damage modes that directly influence tool performance. It is therefore important to find highly effective ways to machine MMCs, and in particular to predict the forces generated when machining them, because this helps in choosing suitable tools and ultimately saves both money and time. This research presents an analytical force model for predicting the forces generated during machining of Metal Matrix Composites. In estimating the generated forces, several aspects of cutting mechanics were considered, including shearing force, ploughing force, and particle fracture force. The chip formation force was obtained from classical orthogonal metal cutting mechanics and the Johnson-Cook equation. The ploughing force was formulated, while the fracture force was calculated from slip line field theory and the Griffith theory of failure. The predicted results were compared with previously measured data. The results showed very good agreement between the theoretically predicted and experimentally measured cutting forces.
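The Johnson-Cook constitutive relation at the heart of the chip-formation term can be written down directly; the constants below are merely of the order reported for aluminium-alloy matrices and are not taken from this work.

```python
import math

def johnson_cook_stress(strain, strain_rate, temp,
                        A, B, n, C, m,
                        ref_rate=1.0, t_room=293.0, t_melt=933.0):
    """Johnson-Cook flow stress:
    sigma = (A + B*eps^n) * (1 + C*ln(rate/ref_rate)) * (1 - T*^m),
    with homologous temperature T* = (T - T_room)/(T_melt - T_room)."""
    t_star = (temp - t_room) / (t_melt - t_room)
    return ((A + B * strain ** n)
            * (1.0 + C * math.log(strain_rate / ref_rate))
            * (1.0 - t_star ** m))

# Illustrative constants only (MPa), of the order used for Al-alloy matrices.
print(johnson_cook_stress(strain=0.5, strain_rate=1e4, temp=500.0,
                          A=324.0, B=114.0, n=0.42, C=0.002, m=1.34))
```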
PredicT-ML: a tool for automating machine learning model building with big clinical data.
Luo, Gang
2016-01-01
Predictive modeling is fundamental to transforming large clinical data sets, or "big clinical data," into actionable knowledge for various healthcare applications. Machine learning is a major predictive modeling approach, but two barriers make its use in healthcare challenging. First, a machine learning tool user must choose an algorithm and assign one or more model parameters called hyper-parameters before model training. The algorithm and hyper-parameter values used typically impact model accuracy by over 40%, but their selection requires many labor-intensive manual iterations that can be difficult even for computer scientists. Second, many clinical attributes are repeatedly recorded over time, requiring temporal aggregation before predictive modeling can be performed. Many labor-intensive manual iterations are required to identify a good pair of aggregation period and operator for each clinical attribute. Both barriers result in time and human resource bottlenecks, and preclude healthcare administrators and researchers from asking a series of what-if questions when probing opportunities to use predictive models to improve outcomes and reduce costs. This paper describes our design of and vision for PredicT-ML (prediction tool using machine learning), a software system that aims to overcome these barriers and automate machine learning model building with big clinical data. The paper presents the detailed design of PredicT-ML. PredicT-ML will open the use of big clinical data to thousands of healthcare administrators and researchers and increase the ability to advance clinical research and improve healthcare.
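The temporal-aggregation search that PredicT-ML aims to automate can be pictured as a loop over candidate (period, operator) pairs; a pandas sketch on synthetic recordings follows (all names and values are hypothetical).

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Hypothetical repeated recordings of one clinical attribute per patient.
obs = pd.DataFrame({
    "patient": rng.integers(0, 50, size=1000),
    "time": pd.to_datetime("2015-01-01")
            + pd.to_timedelta(rng.integers(0, 365, size=1000), unit="D"),
    "value": rng.normal(loc=100, scale=15, size=1000),
}).set_index("time").sort_index()

# Screen candidate (aggregation period, operator) pairs; each pair yields
# one candidate feature per patient (the last rolling value observed).
features = {}
for period in ("30D", "90D", "180D"):
    for op in ("mean", "max", "min"):
        rolled = obs.groupby("patient")["value"].rolling(period).agg(op)
        features[f"value_{op}_{period}"] = rolled.groupby("patient").last()
print(pd.DataFrame(features).head())
```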
A New Analysis Tool Assessment for Rotordynamic Modeling of Gas Foil Bearings
NASA Technical Reports Server (NTRS)
Howard, Samuel A.; San Andres, Luis
2010-01-01
Gas foil bearings offer several advantages over traditional bearing types that make them attractive for use in high-speed turbomachinery. They can operate at very high temperatures, require no lubrication supply (oil pumps, seals, etc.), exhibit very long life with no maintenance, and once operating airborne, have very low power loss. The use of gas foil bearings in high-speed turbomachinery has been accelerating in recent years, although the pace has been slow. One of the contributing factors to the slow growth has been a lack of analysis tools, benchmarked to measurements, to predict gas foil bearing behavior in rotating machinery. To address this shortcoming, NASA Glenn Research Center (GRC) has supported the development of analytical tools to predict gas foil bearing performance. One of the codes has the capability to predict rotordynamic coefficients, power loss, film thickness, structural deformation, and more. The current paper presents an assessment of the predictive capability of the code, named XLGFBTH (Texas A&M University). A test rig at GRC is used as a simulated case study to compare rotordynamic analysis using output from the code to actual rotor response as measured in the test rig. The test rig rotor is supported on two gas foil journal bearings manufactured at GRC, with all pertinent geometry disclosed. The resulting comparison shows that the rotordynamic coefficients calculated using XLGFBTH represent the dynamics of the system reasonably well, especially as they pertain to predicting critical speeds.
Analysis and correlation of the test data from an advanced technology rotor system
NASA Technical Reports Server (NTRS)
Jepson, D.; Moffitt, R.; Hilzinger, K.; Bissell, J.
1983-01-01
Comparisons were made of the performance and blade vibratory loads characteristics for an advanced rotor system as predicted by analysis and as measured in a 1/5 scale model wind tunnel test, a full scale model wind tunnel test and flight test. The accuracy with which the various tools available at the various stages in the design/development process (analysis, model test etc.) could predict final characteristics as measured on the aircraft was determined. The accuracy of the analyses in predicting the effects of systematic tip planform variations investigated in the full scale wind tunnel test was evaluated.
Predictive models in cancer management: A guide for clinicians.
Kazem, Mohammed Ali
2017-04-01
Predictive tools in cancer management are used to predict different outcomes including survival probability or risk of recurrence. The uptake of these tools by clinicians involved in cancer management has not been as common as other clinical tools, which may be due to the complexity of some of these tools or a lack of understanding of how they can aid decision-making in particular clinical situations. The aim of this article is to improve clinicians' knowledge and understanding of predictive tools used in cancer management, including how they are built, how they can be applied to medical practice, and what their limitations may be. A literature review was conducted to investigate the role of predictive tools in cancer management. All predictive models share similar characteristics, but depending on the type of the tool, its ability to predict an outcome will differ. Each type has its own pros and cons, and its generalisability will depend on the cohort used to build the tool. These factors will affect the clinician's decision whether to apply the model to their cohort or not. Before a model is used in clinical practice, it is important to appreciate how the model is constructed, what its use may add over and above traditional decision-making tools, and what problems or limitations may be associated with it. Understanding all the above is an important step for any clinician who wants to decide whether or not to use predictive tools in their practice. Copyright © 2016 Royal College of Surgeons of Edinburgh (Scottish charity number SC005317) and Royal College of Surgeons in Ireland. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Durga Prasada Rao, V.; Harsha, N.; Raghu Ram, N. S.; Navya Geethika, V.
2018-02-01
In this work, turning was performed to optimize the surface finish or roughness (Ra) of stainless steel 304 with uncoated and coated carbide tools under dry conditions. The carbide tools were coated with a Titanium Aluminium Nitride (TiAlN) nano coating using the Physical Vapour Deposition (PVD) method. The machining parameters, viz., cutting speed, depth of cut and feed rate, which have a major impact on Ra, were considered during turning. The experiments were designed as per a Taguchi orthogonal array and the machining was done accordingly. Second-order regression equations were then developed on the basis of the experimental results for Ra in terms of the machining parameters used. Regarding the effect of machining parameters, an upward trend is observed in Ra with respect to feed rate, and as cutting speed increases the Ra value increases slightly due to chatter and vibrations. The adequacy of the response variable (Ra) was tested by conducting additional experiments. The predicted Ra values were found to closely match the corresponding experimental values for the uncoated and coated tools, with average % errors within acceptable limits. The surface roughness equations of the uncoated and coated tools were then set as the objectives of an optimization problem and solved using the Differential Evolution (DE) algorithm. The tool lives of the uncoated and coated tools were also predicted using Taylor's tool life equation.
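Taylor's tool life equation used for the life predictions has the closed form V·T^n = C; a sketch with illustrative constants follows (the paper's fitted n and C values are not reproduced).

```python
def taylor_tool_life(cutting_speed, n, c):
    """Taylor's tool life equation, V * T**n = C, solved for life T (min)
    at cutting speed V (m/min); n and C are empirical constants."""
    return (c / cutting_speed) ** (1.0 / n)

# Illustrative constants only; real values come from the turning trials.
for v in (120, 160, 200):
    print(v, "m/min ->", round(taylor_tool_life(v, n=0.25, c=350.0), 1), "min")
```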
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mills, Evan
There exist hundreds of building energy software tools, both web- and disk-based. These tools exhibit considerable range in approach and creativity, with some being highly specialized and others able to consider the building as a whole. However, users are faced with a dizzying array of choices and, often, conflicting results. The fragmentation of development and deployment efforts has hampered tool quality and market penetration. The purpose of this review is to provide information for defining the desired characteristics of residential energy tools, and to encourage future tool development that improves on current practice. This project entails (1) creating a framework for describing possible technical and functional characteristics of such tools, (2) mapping existing tools onto this framework, (3) exploring issues of tool accuracy, and (4) identifying "best practice" and strategic opportunities for tool design. We evaluated 50 web-based residential calculators, 21 of which we regard as "whole-house" tools (i.e., covering a range of end uses). Of the whole-house tools, 13 provide open-ended energy calculations, 5 normalize the results to actual costs (a.k.a. "bill-disaggregation tools"), and 3 provide both options. Across the whole-house tools, we found a range of 5 to 58 house-descriptive features (out of 68 identified in our framework) and 2 to 41 analytical and decision-support features (55 possible). We also evaluated 15 disk-based residential calculators, six of which are whole-house tools. Of these tools, 11 provide open-ended calculations, 1 normalizes the results to actual costs, and 3 provide both options. These tools offered ranges of 18 to 58 technical features (70 possible) and 10 to 40 user- and decision-support features (56 possible). The comparison shows that such tools can employ many approaches and levels of detail. Some tools require a relatively small number of well-considered inputs while others ask a myriad of questions and still miss key issues. The value of detail has a lot to do with the type of question(s) being asked by the user (e.g., the availability of dozens of miscellaneous appliances is immaterial for a user attempting to evaluate the potential for space-heating savings by installing a new furnace). More detail does not, according to our evaluation, automatically translate into a "better" or "more accurate" tool. Efforts to quantify and compare the "accuracy" of these tools are difficult at best, and prior tool-comparison studies have not undertaken this in a meaningful way. The ability to evaluate accuracy is inherently limited by the availability of measured data. Furthermore, certain tool outputs can only be measured against "actual" values that are themselves calculated (e.g., HVAC sizing), while others are rarely if ever available (e.g., measured energy use or savings for specific measures). Similarly challenging is understanding the sources of inaccuracies. There are many ways in which quantitative errors can occur in tools, ranging from programming errors to problems inherent in a tool's design. Due to hidden assumptions and non-variable "defaults", most tools cannot be fully tested across the desirable range of building configurations, operating conditions, weather locations, etc. Many factors conspire to confound performance comparisons among tools. Differences in inputs can range from weather city, to types of HVAC systems, to appliance characteristics, to occupant-driven effects such as thermostat management.
Differences in results would thus no doubt emerge from an extensive comparative exercise, but the sources or implications of these differences for the purposes of accuracy evaluation or tool development would remain largely unidentifiable (especially given the paucity of technical documentation available for most tools). For the tools that we tested, the predicted energy bills for a single test building ranged widely (by nearly a factor of three), and far more so at the end-use level. Most tools over-predicted energy bills and all over-predicted consumption. Variability was lower among disk-based tools, but they more significantly over-predicted actual use. The deviations (over-predictions) we observed from actual bills corresponded to up to $1400 per year (approx. 250 percent of the actual bills). For bill-disaggregation tools, wherein the results are forced to equal actual bills, the accuracy issue shifts to whether or not the total is properly attributed to the various end uses and to whether savings calculations are done accurately (a challenge that demands relatively rare end-use data). Here, too, we observed a number of dubious results. Energy savings estimates automatically generated by the web-based tools varied from $46/year (5 percent of predicted use) to $625/year (52 percent of predicted use).
GPS-ARM: Computational Analysis of the APC/C Recognition Motif by Predicting D-Boxes and KEN-Boxes
Ren, Jian; Cao, Jun; Zhou, Yanhong; Yang, Qing; Xue, Yu
2012-01-01
Anaphase-promoting complex/cyclosome (APC/C), an E3 ubiquitin ligase acting together with Cdh1 and/or Cdc20, recognizes and interacts with specific substrates and faithfully orchestrates the proper cell cycle events by targeting proteins for proteasomal degradation. Experimental identification of APC/C substrates is largely dependent on the discovery of APC/C recognition motifs, e.g., the D-box and KEN-box. Although a number of either stringent or loosely defined motifs have been proposed, these motif patterns are of limited use due to their insufficient predictive power. We report the development of a novel GPS-ARM software package for the prediction of D-boxes and KEN-boxes in proteins. Using experimentally identified D-boxes and KEN-boxes as the training data sets, a previously developed GPS (Group-based Prediction System) algorithm was adopted. By extensive evaluation and comparison, the GPS-ARM performance was found to be much better than that obtained using simple motifs. With this powerful tool, we predicted 4,841 potential D-boxes in 3,832 proteins and 1,632 potential KEN-boxes in 1,403 proteins from H. sapiens, while further statistical analysis suggested that both the D-box and KEN-box proteins are involved in a broad spectrum of biological processes beyond the cell cycle. In addition, with the co-localization information, we predicted hundreds of mitosis-specific APC/C substrates with high confidence. As the first computational tool for the prediction of APC/C-mediated degradation, GPS-ARM is a useful source of information for further experimental investigation. GPS-ARM is freely accessible for academic researchers at: http://arm.biocuckoo.org. PMID:22479614
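For contrast with GPS-ARM's group-based scoring, the loosely defined motif patterns it outperforms can be written as plain regular expressions (canonical D-box R-x-x-L and KEN-box K-E-N); the protein sequence below is invented for the example.

```python
import re

# Minimal canonical degron patterns; these are the simple motifs whose
# limited predictive power the paper improves upon.
D_BOX = re.compile(r"R..L")
KEN_BOX = re.compile(r"KEN")

def scan_motifs(sequence):
    """Return (motif, start) pairs for candidate APC/C degrons."""
    hits = [("D-box", m.start()) for m in D_BOX.finditer(sequence)]
    hits += [("KEN-box", m.start()) for m in KEN_BOX.finditer(sequence)]
    return sorted(hits, key=lambda h: h[1])

# Made-up sequence containing both motif types.
print(scan_motifs("MSKENLPKATRAALGNVNDSPSR"))
```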
Armijo-Olivo, Susan; Woodhouse, Linda J; Steenstra, Ivan A; Gross, Douglas P
2016-12-01
To determine whether the Disabilities of the Arm, Shoulder, and Hand (DASH) tool added to the predictive ability of established prognostic factors, including patient demographic and clinical outcomes, to predict return to work (RTW) in injured workers with musculoskeletal (MSK) disorders of the upper extremity. A retrospective cohort study was conducted using a population-based database from the Workers' Compensation Board of Alberta (WCB-Alberta) that focused on claimants with upper extremity injuries. Besides the DASH, potential predictors included demographic, occupational, clinical and health usage variables. The outcome was receipt of compensation benefits after 3 months. To identify RTW predictors, a purposeful logistic modelling strategy was used. A series of receiver operating characteristic curve analyses was performed to determine which model provided the best discriminative ability. The sample included 3036 claimants with upper extremity injuries. The final model for predicting RTW included the total DASH score in addition to other established predictors. The area under the curve for this model was 0.77, which is interpreted as fair discrimination. This model was statistically significantly different from the model of established predictors alone (p<0.001). When comparing the DASH total score versus DASH item 23, a non-significant difference was obtained between the models (p=0.34). The DASH tool together with other established predictors significantly helped predict RTW after 3 months in participants with upper extremity MSK disorders. An appealing result for clinicians and busy researchers is that DASH item 23 has predictive ability equal to that of the total DASH score. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
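The key analytic step, comparing a model of established predictors with and without the DASH score by area under the ROC curve, can be sketched on synthetic data (the coefficients and variables are placeholders, not the WCB-Alberta predictors).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
base = rng.normal(size=(n, 4))     # stand-ins for established predictors
dash = rng.normal(size=(n, 1))     # stand-in for total DASH score
logit = base @ [0.4, -0.3, 0.2, 0.1] + 0.6 * dash[:, 0]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

Xb_tr, Xb_te, Xd_tr, Xd_te, y_tr, y_te = train_test_split(
    base, np.hstack([base, dash]), y, random_state=0)

# Nested models: established predictors alone vs. plus the DASH score.
for name, (X_tr, X_te) in {"established": (Xb_tr, Xb_te),
                           "established + DASH": (Xd_tr, Xd_te)}.items():
    model = LogisticRegression().fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.2f}")
```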
Zheng, Meixun; Bender, Daniel
2018-03-13
Computer-based testing (CBT) has made progress in health sciences education. In 2015, the authors led implementation of a CBT system (ExamSoft) at a dental school in the U.S. Guided by the Technology Acceptance Model (TAM), the purposes of this study were to (a) examine dental students' acceptance of ExamSoft; (b) understand factors impacting acceptance; and (c) evaluate the impact of ExamSoft on students' learning and exam performance. Survey and focus group data revealed that ExamSoft was well accepted by students as a testing tool and acknowledged by most for its potential to support learning. Regression analyses showed that perceived ease of use and perceived usefulness of ExamSoft significantly predicted student acceptance. Prior CBT experience and computer skills did not significantly predict acceptance of ExamSoft. Students reported that ExamSoft promoted learning in the first program year, primarily through timely and rich feedback on examination performance. t-Tests yielded mixed results on whether students performed better on computerized or paper examinations. The study contributes to the literature on CBT and the application of the TAM model in health sciences education. Findings also suggest ways in which health sciences institutions can implement CBT to maximize its potential as an assessment and learning tool.
Clarke, Samuel; Horeczko, Timothy; Carlisle, Matthew; Barton, Joseph D.; Ng, Vivienne; Al-Somali, Sameerah; Bair, Aaron E.
2014-01-01
Background Simulation has been identified as a means of assessing resident physicians’ mastery of technical skills, but there is a lack of evidence for its utility in longitudinal assessments of residents’ non-technical clinical abilities. We evaluated the growth of crisis resource management (CRM) skills in the simulation setting using a validated tool, the Ottawa Crisis Resource Management Global Rating Scale (Ottawa GRS). We hypothesized that the Ottawa GRS would reflect progressive growth of CRM ability throughout residency. Methods Forty-five emergency medicine residents were tracked with annual simulation assessments between 2006 and 2011. We used mixed-methods repeated-measures regression analyses to evaluate elements of the Ottawa GRS by level of training to predict performance growth throughout a 3-year residency. Results Ottawa GRS scores increased over time, and the domains of leadership, problem solving, and resource utilization, in particular, were predictive of overall performance. There was a significant gain in all Ottawa GRS components between postgraduate years 1 and 2, but no significant difference in GRS performance between years 2 and 3. Conclusions In summary, CRM skills are progressive abilities, and simulation is a useful modality for tracking their development. Modification of this tool may be needed to assess advanced learners’ gains in performance. PMID:25499769
Vassallo, James; Beavis, John; Smith, Jason E; Wallis, Lee A
2017-05-01
Triage is a key principle in the effective management of a major incident. There are at least three different triage systems in use worldwide, and previous attempts to validate them have revealed limited sensitivity. Within a civilian adult population, there has been no work to develop an improved system. A retrospective database review of the UK Joint Theatre Trauma Registry was performed for all adult patients (>18 years) presenting to a deployed Military Treatment Facility between 2006 and 2013. Patients were defined as Priority One if they had received one or more life-saving interventions from a previously defined list. Using first recorded hospital physiological data (HR/RR/GCS), binary logistic regression models were used to derive optimum physiological ranges to predict the need for life-saving intervention. This allowed for the derivation of the Modified Physiological Triage Tool-MPTT (GCS≥14, HR≥100, 12
Data Assimilation and Propagation of Uncertainty in Multiscale Cardiovascular Simulation
NASA Astrophysics Data System (ADS)
Schiavazzi, Daniele; Marsden, Alison
2015-11-01
Cardiovascular modeling is the application of computational tools to predict hemodynamics. State-of-the-art techniques couple a 3D incompressible Navier-Stokes solver with a boundary circulation model and can predict local and peripheral hemodynamics, analyze the post-operative performance of surgical designs, and complement clinical data collection, minimizing invasive and risky measurement practices. The ability of these tools to make useful predictions is directly related to their accuracy in representing measured physiologies. Tuning of model parameters is therefore a topic of paramount importance and should include clinical data uncertainty, revealing how this uncertainty will affect the predictions. We propose a fully Bayesian, multi-level approach to data assimilation of uncertain clinical data in multiscale circulation models. To reduce the computational cost, we use a stable, condensed approximation of the 3D model built by linear sparse regression of the pressure/flow rate relationship at the outlets. Finally, we consider the problem of non-intrusively propagating the uncertainty in model parameters to the resulting hemodynamics, and compare Monte Carlo simulation with Stochastic Collocation approaches based on Polynomial or Multi-resolution Chaos expansions.
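As a greatly reduced stand-in for propagating parameter uncertainty through a multiscale hemodynamic model, the sketch below pushes Monte Carlo samples of two uncertain boundary parameters through a two-element Windkessel; the distributions and values are illustrative only.

```python
import numpy as np

def windkessel_pressure(R, C, q0=100.0, t=1.0, p0=80.0):
    """Pressure of a two-element Windkessel under constant inflow q0,
    starting from p0: p(t) = p0*exp(-t/(R*C)) + R*q0*(1 - exp(-t/(R*C)))."""
    decay = np.exp(-t / (R * C))
    return p0 * decay + R * q0 * (1.0 - decay)

rng = np.random.default_rng(0)
n = 10_000
# Uncertain boundary parameters, e.g. inferred from assimilated clinical data.
R = rng.normal(1.0, 0.1, size=n)   # peripheral resistance (arbitrary units)
C = rng.normal(1.5, 0.2, size=n)   # arterial compliance (arbitrary units)

p = windkessel_pressure(R, C)
print(f"predicted pressure: {p.mean():.1f} +/- {p.std():.1f}")
```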
Streamflow prediction using multi-site rainfall obtained from hydroclimatic teleconnection
NASA Astrophysics Data System (ADS)
Kashid, S. S.; Ghosh, Subimal; Maity, Rajib
2010-12-01
Simultaneous variations in weather and climate over widely separated regions are commonly known as "hydroclimatic teleconnections". Rainfall and runoff patterns over continents are found to be significantly teleconnected with large-scale circulation patterns through such hydroclimatic teleconnections. Though such teleconnections exist in nature, it is very difficult to model them due to their inherent complexity. Statistical techniques and Artificial Intelligence (AI) tools have gained popularity in modeling hydroclimatic teleconnections, owing to their ability to capture the complicated relationship between the predictors (e.g., sea surface temperatures) and the predictand (e.g., rainfall). Genetic Programming is one such AI tool, capable of capturing nonlinear relationships between predictor and predictand due to its flexible functional structure. In the present study, gridded multi-site weekly rainfall is predicted from El Niño Southern Oscillation (ENSO) indices, Equatorial Indian Ocean Oscillation (EQUINOO) indices, Outgoing Longwave Radiation (OLR) and lagged rainfall at grid points over the catchment, using Genetic Programming. The predicted rainfall is further used in a Genetic Programming model to predict streamflows. The model is applied to weekly forecasting of streamflow in the Mahanadi River, India, and satisfactory performance is observed.
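A small symbolic-regression stand-in for the GP rainfall model, assuming the gplearn package and synthetic predictors in place of the ENSO/EQUINOO/OLR indices and lagged rainfall:

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
n = 400
# Placeholder predictors: climate indices and lagged rainfall.
X = rng.normal(size=(n, 4))
# Synthetic nonlinear "teleconnection" for illustration only.
y = 2.0 * X[:, 0] * X[:, 3] + np.sin(X[:, 1]) + 0.1 * rng.normal(size=n)

# Genetic Programming evolves an explicit functional form.
gp = SymbolicRegressor(population_size=1000, generations=10,
                       function_set=("add", "sub", "mul", "sin"),
                       random_state=0)
gp.fit(X[:300], y[:300])
print("held-out R^2:", gp.score(X[300:], y[300:]))
print("evolved expression:", gp._program)
```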
NASA Astrophysics Data System (ADS)
Darmon, David
2018-03-01
In the absence of mechanistic or phenomenological models of real-world systems, data-driven models become necessary. The discovery of various embedding theorems in the 1980s and 1990s motivated a powerful set of tools for analyzing deterministic dynamical systems via delay-coordinate embeddings of observations of their component states. However, in many branches of science, the condition of operational determinism is not satisfied, and stochastic models must be brought to bear. For such stochastic models, the tool set developed for delay-coordinate embedding is no longer appropriate, and a new toolkit must be developed. We present an information-theoretic criterion, the negative log-predictive likelihood, for selecting the embedding dimension for a predictively optimal data-driven model of a stochastic dynamical system. We develop a nonparametric estimator for the negative log-predictive likelihood and compare its performance to a recently proposed criterion based on active information storage. Finally, we show how the output of the model selection procedure can be used to compare candidate predictors for a stochastic system to an information-theoretic lower bound.
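A simplistic kernel-density version of the criterion conveys the idea: for each candidate embedding dimension, estimate log p(future | past) as the difference of joint and marginal log-densities over delay vectors, and pick the dimension minimizing the negative log-predictive likelihood on held-out data. The fixed-bandwidth Gaussian KDE below is a stand-in for the paper's nonparametric estimator, not a reproduction of it.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def delay_matrix(x, dim):
    """Rows are (x_t, ..., x_{t+dim-1}, x_{t+dim}): past window plus future."""
    return np.column_stack([x[i:len(x) - dim + i] for i in range(dim + 1)])

def nlpl(train, test, dim, bandwidth=0.3):
    """Negative log-predictive likelihood of the next observation given a
    dim-dimensional delay vector, via
    log p(future | past) = log p(past, future) - log p(past)."""
    joint = KernelDensity(bandwidth=bandwidth).fit(delay_matrix(train, dim))
    marg = KernelDensity(bandwidth=bandwidth).fit(delay_matrix(train, dim)[:, :-1])
    Z = delay_matrix(test, dim)
    return -(joint.score_samples(Z) - marg.score_samples(Z[:, :-1])).mean()

rng = np.random.default_rng(0)
# Noisy AR(2) series: the predictively optimal dimension should be near 2.
x = np.zeros(2000)
for t in range(2, 2000):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal(scale=0.5)

train, test = x[:1500], x[1500:]
for dim in (1, 2, 3, 4):
    print(dim, round(nlpl(train, test, dim), 3))
```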
Isolated Open Rotor Noise Prediction Assessment Using the F31A31 Historical Blade Set
NASA Technical Reports Server (NTRS)
Nark, Douglas M.; Jones, William T.; Boyd, D. Douglas, Jr.; Zawodny, Nikolas S.
2016-01-01
In an effort to mitigate next-generation fuel efficiency and environmental impact concerns for aviation, open rotor propulsion systems have received renewed interest. However, maintaining the high propulsive efficiency while simultaneously meeting noise goals has been one of the challenges in making open rotor propulsion a viable option. Improvements in prediction tools and design methodologies have opened the design space for next generation open rotor designs that satisfy these challenging objectives. As such, validation of aerodynamic and acoustic prediction tools has been an important aspect of open rotor research efforts. This paper describes validation efforts of a combined computational fluid dynamics and Ffowcs Williams and Hawkings equation methodology for open rotor aeroacoustic modeling. Performance and acoustic predictions were made for a benchmark open rotor blade set and compared with measurements over a range of rotor speeds and observer angles. Overall, the results indicate that the computational approach is acceptable for assessing low-noise open rotor designs. Additionally, this approach may be used to provide realistic incident source fields for acoustic shielding/scattering studies on various aircraft configurations.
van Tongeren, Martie; Lamb, Judith; Cherrie, John W; MacCalman, Laura; Basinas, Ioannis; Hesse, Susanne
2017-10-01
Tier 1 exposure tools recommended for use under REACH are designed to easily identify situations that may pose a risk to health through conservative exposure predictions. However, no comprehensive evaluation of the performance of the lower tier tools has previously been carried out. The ETEAM project aimed to evaluate several lower tier exposure tools (ECETOC TRA, MEASE, and EMKG-EXPO-TOOL) as well as one higher tier tool (STOFFENMANAGER®). This paper describes the results of the external validation of tool estimates using measurement data. Measurement data were collected from a range of providers, both in Europe and United States, together with contextual information. Individual measurement and aggregated measurement data were obtained. The contextual information was coded into the tools to obtain exposure estimates. Results were expressed as percentage of measurements exceeding the tool estimates and presented by exposure category (non-volatile liquid, volatile liquid, metal abrasion, metal processing, and powder handling). We also explored tool performance for different process activities as well as different scenario conditions and exposure levels. In total, results from nearly 4000 measurements were obtained, with the majority for the use of volatile liquids and powder handling. The comparisons of measurement results with tool estimates suggest that the tools are generally conservative. However, the tools were more conservative when estimating exposure from powder handling compared to volatile liquids and other exposure categories. In addition, results suggested that tool performance varies between process activities and scenario conditions. For example, tools were less conservative when estimating exposure during activities involving tabletting, compression, extrusion, pelletisation, granulation (common process activity PROC14) and transfer of substance or mixture (charging and discharging) at non-dedicated facilities (PROC8a; powder handling only). With the exception of STOFFENMANAGER® (for estimating exposure during powder handling), the tools were less conservative for scenarios with lower estimated exposure levels. This is the most comprehensive evaluation of the performance of REACH exposure tools carried out to date. The results show that, although generally conservative, the tools may not always achieve the performance specified in the REACH guidance, i.e. using the 75th or 90th percentile of the exposure distribution for the risk characterisation. Ongoing development, adjustment, and recalibration of the tools with new measurement data are essential to ensure adequate characterisation and control of worker exposure to hazardous substances. © The Author 2017. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
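The headline validation statistic, the percentage of measurements exceeding the tool estimate within each exposure category, is straightforward to compute; the numbers below are invented for the example.

```python
import pandas as pd

# Hypothetical paired data: measured exposure vs. tool estimate (mg/m3).
df = pd.DataFrame({
    "category": ["powder", "powder", "volatile", "volatile", "metal"],
    "measured": [0.8, 2.4, 1.1, 0.3, 0.5],
    "tool_estimate": [5.0, 2.0, 0.9, 0.6, 0.7],
})

df["exceeds"] = df["measured"] > df["tool_estimate"]
# A conservative tool should keep these percentages low (REACH guidance
# benchmarks against the 75th/90th percentile of the exposure distribution).
print(df.groupby("category")["exceeds"].mean().mul(100).round(1))
```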
Chun, Ting Sie; Malek, M A; Ismail, Amelia Ritahani
2015-01-01
The development of effluent removal prediction is crucial in providing a planning tool necessary for the future development and construction of a septic sludge treatment plant (SSTP), especially in developing countries. In order to investigate the expected functionality against the required standard, the prediction of the effluent quality, namely biological oxygen demand, chemical oxygen demand and total suspended solids, of an SSTP was modelled using an artificial intelligence approach. In this paper, we adopt the clonal selection algorithm (CSA) to set up a prediction model, with a well-established method, the least-squares support vector machine (LS-SVM), as a baseline model. The test results of the case study showed that the prediction of the CSA-based SSTP model worked well and provided model performance as satisfactory as the LS-SVM model. The CSA approach shows that fewer control and training parameters are required for model simulation compared with the LS-SVM approach. The ability of a CSA approach to handle limited data samples, non-linear sample functions and multidimensional pattern recognition makes it a powerful tool for modelling the prediction of effluent removals in an SSTP.
Kostal, Jakub; Voutchkova-Kostal, Adelina
2016-01-19
Using computer models to accurately predict toxicity outcomes is considered to be a major challenge. However, state-of-the-art computational chemistry techniques can now be incorporated in predictive models, supported by advances in mechanistic toxicology and the exponential growth of computing resources witnessed over the past decade. The CADRE (Computer-Aided Discovery and REdesign) platform relies on quantum-mechanical modeling of molecular interactions that represent key biochemical triggers in toxicity pathways. Here, we present an external validation exercise for CADRE-SS, a variant developed to predict the skin sensitization potential of commercial chemicals. CADRE-SS is a hybrid model that evaluates skin permeability using Monte Carlo simulations, assigns reactive centers in a molecule and possible biotransformations via expert rules, and determines reactivity with skin proteins via quantum-mechanical modeling. The results were promising, with a very good overall concordance of 93% between experimental and predicted values. Comparison to performance metrics yielded by other tools available for this endpoint suggests that CADRE-SS offers distinct advantages for first-round screenings of chemicals and could be used as an in silico alternative to animal tests where permissible by legislative programs.
Lamberink, Herm J; Boshuisen, Kim; Otte, Willem M; Geleijns, Karin; Braun, Kees P J
2018-03-01
The objective of this study was to create a clinically useful tool for individualized prediction of seizure outcomes following antiepileptic drug withdrawal after pediatric epilepsy surgery. We used data from the European retrospective TimeToStop study, which included 766 children from 15 centers, to perform a proportional hazard regression analysis. The 2 outcome measures were seizure recurrence and seizure freedom in the last year of follow-up. Prognostic factors were identified through systematic review of the literature. The strongest predictors for each outcome were selected through backward selection, after which nomograms were created. The final models included 3 to 5 factors per model. Discrimination in terms of adjusted concordance statistic was 0.68 (95% confidence interval [CI] 0.67-0.69) for predicting seizure recurrence and 0.73 (95% CI 0.72-0.75) for predicting eventual seizure freedom. An online prediction tool is provided on www.epilepsypredictiontools.info/ttswithdrawal. The presented models can improve counseling of patients and parents regarding postoperative antiepileptic drug policies, by estimating individualized risks of seizure recurrence and eventual outcome. Wiley Periodicals, Inc. © 2018 International League Against Epilepsy.
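The type of model behind such nomograms is a Cox proportional hazards regression; a sketch using the lifelines package on synthetic data (the variables are placeholders, not the TimeToStop predictors) shows how the concordance statistic is obtained.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 766
abnormal_eeg = rng.integers(0, 2, n)
# Synthetic hazard: recurrence is faster with an abnormal post-operative EEG.
months = rng.exponential(scale=24.0 * np.exp(-0.7 * abnormal_eeg))
df = pd.DataFrame({
    "epilepsy_duration": rng.exponential(5.0, n),  # placeholder predictor
    "abnormal_postop_eeg": abnormal_eeg,           # placeholder predictor
    "months_to_recurrence": months,
    "recurred": rng.integers(0, 2, n),             # synthetic event indicator
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months_to_recurrence", event_col="recurred")
print("concordance statistic:", round(cph.concordance_index_, 2))
```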
2-D Circulation Control Airfoil Benchmark Experiments Intended for CFD Code Validation
NASA Technical Reports Server (NTRS)
Englar, Robert J.; Jones, Gregory S.; Allan, Brian G.; Lin, John C.
2009-01-01
A current NASA Research Announcement (NRA) project being conducted by Georgia Tech Research Institute (GTRI) personnel and NASA collaborators includes the development of Circulation Control (CC) blown airfoils to improve subsonic aircraft high-lift and cruise performance. The emphasis of this program is the development of CC active flow control concepts for high-lift augmentation, drag control, and cruise efficiency. The collaboration includes work by NASA research engineers, whereas CFD validation and flow physics experimental research are part of NASA's systematic approach to developing design and optimization tools for CC applications to fixed-wing aircraft. The design space for CESTOL type aircraft is focusing on geometries that depend on advanced flow control technologies that include Circulation Control aerodynamics. The ability to consistently predict advanced aircraft performance requires improvements in design tools to include these advanced concepts. Validation of these tools will be based on experimental methods applied to complex flows that go beyond conventional aircraft modeling techniques. This paper focuses on recent/ongoing benchmark high-lift experiments and CFD efforts intended to provide 2-D CFD validation data sets related to NASA's Cruise Efficient Short Take Off and Landing (CESTOL) study. Both the experimental data and related CFD predictions are discussed.
A new tool called DISSECT for analysing large genomic data sets using a Big Data approach
Canela-Xandri, Oriol; Law, Andy; Gray, Alan; Woolliams, John A.; Tenesa, Albert
2015-01-01
Large-scale genetic and genomic data are increasingly available and the major bottleneck in their analysis is a lack of sufficiently scalable computational tools. To address this problem in the context of complex traits analysis, we present DISSECT. DISSECT is a new and freely available software that is able to exploit the distributed-memory parallel computational architectures of compute clusters, to perform a wide range of genomic and epidemiologic analyses, which currently can only be carried out on reduced sample sizes or under restricted conditions. We demonstrate the usefulness of our new tool by addressing the challenge of predicting phenotypes from genotype data in human populations using mixed-linear model analysis. We analyse simulated traits from 470,000 individuals genotyped for 590,004 SNPs in ∼4 h using the combined computational power of 8,400 processor cores. We find that prediction accuracies in excess of 80% of the theoretical maximum could be achieved with large sample sizes. PMID:26657010
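At a vastly reduced scale, the phenotype-from-genotype prediction task can be sketched with ridge regression on SNP counts, which for a suitable penalty is equivalent to a GBLUP-style mixed-linear model predictor; the data here are simulated stand-ins, not the DISSECT workload.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_ind, n_snp = 2000, 5000   # tiny stand-in for 470,000 x 590,004
genotypes = rng.integers(0, 3, size=(n_ind, n_snp)).astype(float)
true_effects = rng.normal(scale=0.02, size=n_snp)
phenotype = genotypes @ true_effects + rng.normal(size=n_ind)

G_tr, G_te, y_tr, y_te = train_test_split(genotypes, phenotype,
                                          random_state=0)
# Ridge regression over all SNPs jointly; the penalty plays the role of
# the variance-component ratio in a mixed-linear (GBLUP-style) model.
model = Ridge(alpha=0.5 * n_snp).fit(G_tr, y_tr)
print("held-out prediction R^2:", round(model.score(G_te, y_te), 2))
```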
Collecting the chemical structures and data for necessary QSAR modeling is facilitated by available public databases and open data. However, QSAR model performance is dependent on the quality of data and modeling methodology used. This study developed robust QSAR models for physi...
Predictive Heterosis in Multibreed Evaluations Using Quantitative and Molecular Approaches
USDA-ARS?s Scientific Manuscript database
Heterosis is the extra genetic boost in performance obtained by crossing two cattle breeds. It is an important tool for increasing the efficiency of beef production. It is also important to adjust data used to calculate genetic evaluations for differences in heterosis. Good estimates of heterosis...
Graduate Student Project: Operations Management Product Plan
ERIC Educational Resources Information Center
Fish, Lynn
2007-01-01
An operations management product project is an effective instructional technique that fills a void in current operations management literature in product planning. More than 94.1% of 286 graduates favored the project as a learning tool, and results demonstrate the significant impact the project had in predicting student performance. The author…
Evaluating the mitigation of greenhouse gas emissions and adaptation in dairy production.
USDA-ARS?s Scientific Manuscript database
Process-level modeling at the farm scale provides a tool for evaluating strategies for both mitigating greenhouse gas emissions and adapting to climate change. The Integrated Farm System Model (IFSM) simulates representative crop, beef or dairy farms over many years of weather to predict performance...
The collection of chemical structures and associated experimental data for QSAR modeling is facilitated by the increasing number and size of public databases. However, the performance of QSAR models highly depends on the quality of the data used and the modeling methodology. The ...
Loran-C time difference calculations
NASA Technical Reports Server (NTRS)
Fischer, J. P.
1978-01-01
Some of the simpler mathematical equations which may be used in Loran-C navigation calculations were examined. A technique is presented that allows Loran-C time differences to be predicted at a given location. This is useful for receiver performance work and as a tool for more complex calculations, such as position fixing.
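A simplified spherical-earth version of the calculation illustrates the idea: the predicted time difference is the secondary station's emission delay plus the difference in propagation times from the secondary and master stations to the receiver. The coordinates and delay below are hypothetical, and real implementations add spheroidal geometry and secondary-phase-factor corrections.

```python
import math

C_KM_PER_US = 0.299792458  # propagation speed, km per microsecond

def great_circle_km(lat1, lon1, lat2, lon2, r=6371.0):
    """Great-circle distance via the haversine formula (spherical earth)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def time_difference_us(rx, master, secondary, emission_delay_us):
    """Predicted Loran-C TD: the secondary's emission delay plus the
    difference in propagation times from secondary and master to the
    receiver. Ignores spheroidal geometry and phase-factor corrections."""
    t_m = great_circle_km(*rx, *master) / C_KM_PER_US
    t_s = great_circle_km(*rx, *secondary) / C_KM_PER_US
    return emission_delay_us + t_s - t_m

# Hypothetical receiver/station coordinates (lat, lon) and emission delay.
print(time_difference_us((37.0, -76.0), (39.5, -87.5), (34.1, -77.9),
                         emission_delay_us=11000.0))
```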
NASA Technical Reports Server (NTRS)
Smith, Mark S.; Bui, Trong T.; Garcia, Christian A.; Cumming, Stephen B.
2016-01-01
A pair of compliant trailing edge flaps was flown on a modified GIII airplane. Prior to flight test, multiple analysis tools of various levels of complexity were used to predict the aerodynamic effects of the flaps. Vortex lattice, full potential flow, and full Navier-Stokes aerodynamic analysis software programs were used for prediction, in addition to another program that used empirical data. After the flight-test series, lift and pitching moment coefficient increments due to the flaps were estimated from flight data and compared to the results of the predictive tools. The predicted lift increments matched flight data well for all predictive tools for small flap deflections. All tools over-predicted lift increments for large flap deflections. The potential flow and Navier-Stokes programs predicted pitching moment coefficient increments better than the other tools.
Remote sensing of rainfall for flash flood prediction in the United States
NASA Astrophysics Data System (ADS)
Gourley, J. J.; Flamig, Z.; Vergara, H. J.; Clark, R. A.; Kirstetter, P.; Terti, G.; Hong, Y.; Howard, K.
2015-12-01
This presentation will briefly describe the Multi-Radar Multi-Sensor (MRMS) system that ingests all NEXRAD and Canadian weather radar data and produces accurate rainfall estimates at 1-km resolution every 2 min. This real-time system, which was recently transitioned for operational use in the National Weather Service, provides forcing to a suite of flash flood prediction tools. The Flooded Locations and Simulated Hydrographs (FLASH) project provides 6-hr forecasts of impending flash flooding across the US at the same 1-km grid cell resolution as the MRMS rainfall forcing. This presentation will describe the ensemble hydrologic modeling framework, provide an evaluation at gauged basins over a 10-year period, and show the FLASH tools' performance during the record-setting floods in Oklahoma and Texas in May and June 2015.
Shao, Q; Rowe, R C; York, P
2007-06-01
This study investigated an artificial intelligence technology - model trees - as a modelling tool applied to an immediate release tablet formulation database. The modelling performance was compared with artificial neural networks, which are well established and widely applied in the pharmaceutical product formulation field. The predictability of the generated models was validated on unseen data and judged by the correlation coefficient R². Output from the model tree analyses produced multivariate linear equations which predicted tablet tensile strength, disintegration time, and drug dissolution profiles of similar quality to neural network models. However, additional and valuable knowledge hidden in the formulation database was extracted from these equations. It is concluded that, as a transparent technology, model trees are useful tools for formulators.
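As an illustration of why model trees are "transparent", here is a hand-rolled, one-split sketch of the idea: a decision split whose leaves hold inspectable linear equations. Real M5-style model trees grow and prune the tree automatically; the response variable and split point below are invented for the example.

```python
# Minimal illustration of the model-tree idea: a split whose leaves hold
# multivariate linear equations a formulator can read directly.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(300, 3))           # e.g. three excipient fractions
# piecewise-linear "tensile strength" with a regime change at x0 = 0.5
y = np.where(X[:, 0] < 0.5,
             2 * X[:, 1] + X[:, 2],
             5 * X[:, 1] - X[:, 2]) + rng.normal(0, 0.05, 300)

left, right = X[:, 0] < 0.5, X[:, 0] >= 0.5
m_left = LinearRegression().fit(X[left], y[left])
m_right = LinearRegression().fit(X[right], y[right])
# Each leaf is a transparent linear equation:
print("x0 < 0.5 :", m_left.coef_.round(2), round(m_left.intercept_, 2))
print("x0 >= 0.5:", m_right.coef_.round(2), round(m_right.intercept_, 2))
```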
2016-01-01
Recent studies of children's tool innovation have revealed that there is variation in children's success in middle childhood. In two individual differences studies, we sought to identify personal characteristics that might predict success on an innovation task. In Study 1, we found that although measures of divergent thinking were related to each other, they did not predict innovation success. In Study 2, we measured executive functioning, including inhibition, working memory, attentional flexibility and ill-structured problem-solving. None of these measures predicted innovation; rather, innovation was predicted by children's performance on a receptive vocabulary scale that may function as a proxy for general intelligence. We did not find evidence that children's innovation was predicted by specific personal characteristics. PMID:26926280
Review on failure prediction techniques of composite single lap joint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ab Ghani, A.F., E-mail: ahmadfuad@utem.edu.my; Rivai, Ahmad, E-mail: ahmadrivai@utem.edu.my
2016-03-29
Adhesive bonding is the most appropriate joining method in the construction of composite structures. The use of reliable design and prediction techniques will produce better-performing bonded joints. Several recent papers and journal articles have been reviewed and synthesized to understand the current state of the art in this area. This is done by studying the most relevant analytical solutions for composite adherends, starting with a review of the most fundamental ones involving beam/plate theory, then extending to single lap joint non-linearity and failure prediction, and finally to failure prediction for the composite single lap joint. The review also encompasses finite element modelling as a tool to predict the elastic response of the composite single lap joint and to predict failure numerically.
Single-pass memory system evaluation for multiprogramming workloads
NASA Technical Reports Server (NTRS)
Conte, Thomas M.; Hwu, Wen-Mei W.
1990-01-01
Modern memory systems are composed of levels of cache memories, a virtual memory system, and a backing store. Varying more than a few design parameters and measuring the performance of such systems has traditionally been constrained by the high cost of simulation. Recently introduced models of cache performance reduce the cost of simulation, but at the expense of accuracy of performance prediction. Stack-based methods predict performance accurately using one pass over the trace for all cache sizes, but these techniques have been limited to fully-associative organizations. This paper presents a stack-based method of evaluating the performance of cache memories using a recurrence/conflict model for the miss ratio. Unlike previous work, the performance of realistic cache designs, such as direct-mapped caches, is predicted by the method. The method also includes a new approach to the problem of the effects of multiprogramming. This new technique separates the characteristics of the individual program from those of the workload. The recurrence/conflict method is shown to be practical, general, and powerful by comparing its performance to that of a popular traditional cache simulator. The authors expect that the availability of such a tool will have a large impact on future architectural studies of memory systems.
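The single-pass stack idea that this abstract builds on can be shown in a few lines: one pass over an address trace yields LRU stack (reuse) distances, from which the miss ratio of every fully-associative LRU cache size follows. The paper's recurrence/conflict model extends this to direct-mapped caches; this sketch covers only the classical fully-associative case, with an invented trace.

```python
# One pass over a trace -> miss ratios for all fully-associative LRU sizes.
from collections import Counter

def stack_distances(trace):
    stack, dists = [], []
    for addr in trace:
        if addr in stack:
            d = stack.index(addr)        # depth in LRU stack = reuse distance
            stack.pop(d)
        else:
            d = float("inf")             # cold (first-reference) miss
        dists.append(d)
        stack.insert(0, addr)            # move to most-recently-used position
    return dists

trace = [1, 2, 3, 1, 2, 4, 1, 5, 2, 3, 1, 2]
hist = Counter(stack_distances(trace))
for size in (1, 2, 4, 8):
    misses = sum(c for d, c in hist.items() if d >= size)  # hit iff distance < size
    print(f"LRU cache of {size} lines: miss ratio {misses / len(trace):.2f}")
```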
Sousa, Bruno
2013-01-01
Objective: To translate into Portuguese and evaluate the measuring properties of the Sunderland Scale and the Cubbin & Jackson Revised Scale, which are instruments for evaluating the risk of developing pressure ulcers during intensive care. Methods: This study included the process of translation and adaptation of the scales to the Portuguese language, as well as the validation of these tools. To assess reliability, Cronbach alpha values of 0.702 and 0.708 were identified for the Sunderland Scale and the Cubbin & Jackson Revised Scale, respectively. Predictive validation was performed by comparison with the Braden Scale (gold standard), and the main measurements evaluated were sensitivity, specificity, positive predictive value, negative predictive value, and area under the curve, which were calculated based on cutoff points. Results: The Sunderland Scale exhibited 60% sensitivity, 86.7% specificity, 47.4% positive predictive value, 91.5% negative predictive value, and 0.86 for the area under the curve. The Cubbin & Jackson Revised Scale exhibited 73.3% sensitivity, 86.7% specificity, 52.4% positive predictive value, 94.2% negative predictive value, and 0.91 for the area under the curve. The Braden Scale exhibited 100% sensitivity, 5.3% specificity, 17.4% positive predictive value, 100% negative predictive value, and 0.72 for the area under the curve. Conclusions: Both tools demonstrated reliability and validity for this sample. The Cubbin & Jackson Revised Scale yielded better predictive values for the development of pressure ulcers during intensive care. PMID:23917975
Hansen, Bjoern Oest; Meyer, Etienne H; Ferrari, Camilla; Vaid, Neha; Movahedi, Sara; Vandepoele, Klaas; Nikoloski, Zoran; Mutwil, Marek
2018-03-01
Recent advances in gene function prediction rely on ensemble approaches that integrate results from multiple inference methods to produce superior predictions. Yet, these developments remain largely unexplored in plants. We have explored and compared two methods to integrate 10 gene co-function networks for Arabidopsis thaliana and demonstrate how the integration of these networks produces more accurate gene function predictions for a larger fraction of genes with unknown function. These predictions were used to identify genes involved in mitochondrial complex I formation, and for five of them, we confirmed the predictions experimentally. The ensemble predictions are provided as a user-friendly online database, EnsembleNet. The methods presented here demonstrate that ensemble gene function prediction is a powerful method to boost prediction performance, whereas the EnsembleNet database provides a cutting-edge community tool to guide experimentalists. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.
Ren, Yanping; Yang, Hui; Browning, Colette; Thomas, Shane; Liu, Meiyan
2015-03-01
Eligible studies published before 31 Dec 2013 were identified from the following databases: Ovid Medline, EMBASE, PsycINFO, Scopus, Cochrane Library, CINAHL Plus, and Web of Science. Eight studies aiming to identify MDD in CHD patients were included, covering 10 self-report questionnaires (such as PHQ-2, PHQ-9, PHQ categorical algorithm, HADS-D, BDI, BDI-II, BDI-II-cog, CES-D, SCL-90, and 2 simple yes/no items) and 1 observer rating scale (Ham-D). For MDD alone, the sensitivity and specificity of the various screening tools at the optimal cut-off point varied from 0.34 [0.19, 0.52] to 0.96 [0.78, 1.00] and from 0.69 [0.65, 0.73] to 0.97 [0.93, 0.99], respectively. Results showed that the PHQ-9 (≥10), BDI-II (≥14 or ≥16), and HADS-D (≥5 or ≥4) were widely used for screening MDD in CHD patients. There is no consensus on the optimal screening tool for MDD in CHD patients. When evaluating the performance of a screening tool, the balance between high sensitivity and negative predictive value (NPV) versus specificity and positive predictive value (PPV), depending on the screening or diagnostic purpose, should be considered. After screening, further diagnosis, appropriate management, and necessary referral may also improve cardiovascular outcomes.
Development of the Surface Management System Integrated with CTAS Arrival Tools
NASA Technical Reports Server (NTRS)
Jung, Yoon C.; Jara, Dave
2005-01-01
The Surface Management System (SMS) developed by NASA Ames Research Center in coordination with the Federal Aviation Administration (FAA) is a decision support tool to help tower traffic coordinators and Ground/Local controllers in managing and controlling airport surface traffic in order to increase capacity, efficiency, and flexibility. SMS provides common situation awareness to personnel at various air traffic control facilities such as airport traffic control towers (ATCTs), airline ramp towers, Terminal Radar Approach Control (TRACON), and Air Route Traffic Control Center (ARTCC). SMS also provides a traffic management tool to assist ATCT traffic management coordinators (TMCs) in making decisions such as airport configuration and runway load balancing. Build 1 of the SMS tool was installed and successfully tested at Memphis International Airport (MEM) and received high acceptance scores from ATCT controllers and coordinators, as well as airline ramp controllers. NASA Ames Research Center continues to develop SMS under NASA's Strategic Airspace Usage (SAU) project in order to improve its prediction accuracy and robustness under various modeling uncertainties. This paper reports the recent development effort performed by NASA Ames Research Center: 1) integration of Center TRACON Automation System (CTAS) capability with SMS and 2) an alternative approach to obtaining airline gate information through a publicly available website. Preliminary analysis of air/surface traffic data at the DFW airport has shown significant improvement in predicting airport arrival demand and IN time at the gate. This paper concludes with recommendations for future research and development.
"Chair Stand Test" as Simple Tool for Sarcopenia Screening in Elderly Women.
Pinheiro, P A; Carneiro, J A O; Coqueiro, R S; Pereira, R; Fernandes, M H
2016-01-01
To investigate the association between sarcopenia and "chair stand test" performance, and to evaluate this test as a screening tool for sarcopenia in community-dwelling elderly women. Cross-sectional survey. 173 female individuals, aged ≥ 60 years and living in the urban area of the municipality of Lafaiete Coutinho, in inland Bahia, Brazil. The association between sarcopenia (defined by loss of muscle mass, strength and/or performance) and performance in the "chair stand test" was tested by binary logistic regression. ROC curve parameters were used to evaluate the diagnostic power of the test for sarcopenia screening. The significance level was set at 5%. The model showed that the time taken for the "chair stand test" was positively associated (OR = 1.08; 95% CI = 1.01 - 1.16, p = 0.024) with sarcopenia, indicating that for each 1-second increment in test time, the probability of sarcopenia increased by 8% in elderly women. The cut-off point that showed the best balance between sensitivity and specificity was 13 seconds. The performance of the "chair stand test" showed predictive ability for sarcopenia, making it an effective and simple screening tool for sarcopenia in elderly women. This test could be used for screening sarcopenic elderly women, allowing early interventions.
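A hedged sketch of this type of analysis on synthetic data follows: a binary logistic model of sarcopenia on chair-stand time (giving an odds ratio per second) and a Youden-style search for the cutoff balancing sensitivity and specificity. The study data are not public, so the assumed true relation and the resulting numbers will not match the reported OR = 1.08 or 13 s cutoff.

```python
# Logistic odds ratio per second of chair-stand time plus ROC-based cutoff.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve

rng = np.random.default_rng(2)
time_s = rng.normal(12, 3, 173).clip(5, 30)            # chair-stand times (s)
p = 1 / (1 + np.exp(-(-3.5 + 0.25 * time_s)))          # assumed true relation
sarcopenia = rng.binomial(1, p)

model = LogisticRegression().fit(time_s.reshape(-1, 1), sarcopenia)
print(f"odds ratio per 1 s: {np.exp(model.coef_[0][0]):.2f}")

fpr, tpr, thr = roc_curve(sarcopenia, time_s)          # time itself as the score
best = np.argmax(tpr - fpr)                            # Youden's J statistic
print(f"best cutoff: {thr[best]:.1f} s "
      f"(sens {tpr[best]:.2f}, spec {1 - fpr[best]:.2f})")
```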
Zorn, Kevin C; Capitanio, Umberto; Jeldres, Claudio; Arjane, Philippe; Perrotte, Paul; Shariat, Shahrokh F; Lee, David I; Shalhav, Arieh L; Zagaja, Gregory P; Shikanov, Sergey A; Gofrit, Ofer N; Thong, Alan E; Albala, David M; Sun, Leon; Karakiewicz, Pierre I
2009-04-01
The Partin tables represent one of the most widely used prostate cancer staging tools for seminal vesicle invasion (SVI) prediction. Recently, Gallina et al. reported a novel staging tool for the prediction of SVI that further incorporated the use of the percentage of positive biopsy cores. We performed an external validation of the Gallina et al. nomogram and the 2007 Partin tables in a large, multi-institutional North American cohort of men treated with robotic-assisted radical prostatectomy. Clinical and pathologic data were prospectively gathered from 2,606 patients treated with robotic-assisted radical prostatectomy at one of four North American robotic referral centers between 2002 and 2007. Discrimination was quantified with the area under the receiver operating characteristics curve. The calibration compared the predicted and observed SVI rates throughout the entire range of predictions. At robotic-assisted radical prostatectomy, SVI was recorded in 4.2% of patients. The discriminant properties of the Gallina et al. nomogram resulted in 81% accuracy compared with 78% for the 2007 Partin tables. The Gallina et al. nomogram overestimated the true rate of SVI. Conversely, the Partin tables underestimated the true rate of SVI. The Gallina et al. nomogram offers greater accuracy (81%) than the 2007 Partin tables (78%). However, both tools are associated with calibration limitations that need to be acknowledged and considered before their implementation into clinical practice.
Codner, Pablo; Malick, Waqas; Kouz, Remi; Patel, Amisha; Chen, Cheng-Han; Terre, Juan; Landes, Uri; Vahl, Torsten Peter; George, Isaac; Nazif, Tamim; Kirtane, Ajay J; Khalique, Omar K; Hahn, Rebecca T; Leon, Martin B; Kodali, Susheel
2018-05-08
Risk assessment tools currently used to predict mortality in transcatheter aortic valve implantation (TAVI) were designed for patients undergoing cardiac surgery. We aim to assess the accuracy of the TAVI-dedicated American College of Cardiology / Transcatheter Valve Therapies (ACC/TVT) risk score in predicting mortality outcomes. Consecutive patients (n=1038) undergoing TAVI at a single institution from 2014 to 2016 were included. The ACC/TVT registry mortality risk score, the Society of Thoracic Surgeons Predicted Risk of Mortality (STS-PROM) score and the EuroSCORE II were calculated for all patients. In-hospital and 30-day all-cause mortality rates were 1.3% and 2.9%, respectively. The ACC/TVT risk stratification tool scored higher for patients who died in hospital than for those who survived the index hospitalization (6.4 ± 4.6 vs. 3.5 ± 1.6, p = 0.03). The ACC/TVT score showed a high level of discrimination, with a C-index for in-hospital mortality of 0.74, 95% CI [0.59 - 0.88]. There were no significant differences between the performance of the ACC/TVT registry risk score, the EuroSCORE II and the STS-PROM for in-hospital and 30-day mortality. The ACC/TVT registry risk model is a dedicated tool to aid in the prediction of in-hospital mortality risk after TAVI.
Kamath, Ganesh; Baker, Gary A
2012-06-14
Free energies for graphene exfoliation from bilayer graphene using ionic liquids based on various cations paired with the bis(trifluoromethylsulfonyl)imide anion were determined from adaptive biasing force molecular dynamics (ABF-MD) simulations and are in excellent qualitative agreement with experiment. This method has notable potential as an a priori screening tool for performance-based rank-order prediction of novel ionic liquids for the dispersion and exfoliation of various nanocarbons and inorganic graphene analogues.
Olives, Casey; Pagano, Marcello
2013-01-01
Background Lot Quality Assurance Sampling (LQAS) is a provably useful tool for monitoring health programmes. Although LQAS ensures acceptable Producer and Consumer risks, the literature alleges that the method suffers from poor specificity and positive predictive values (PPVs). We suggest that poor LQAS performance is due, in part, to variation in the true underlying distribution. However, until now the role of the underlying distribution in expected performance has not been adequately examined. Methods We present Bayesian-LQAS (B-LQAS), an approach to incorporating prior information into the choice of the LQAS sample size and decision rule, and explore its properties through a numerical study. Additionally, we analyse vaccination coverage data from UNICEF’s State of the World’s Children in 1968–1989 and 2008 to exemplify the performance of LQAS and B-LQAS. Results Results of our numerical study show that the choice of LQAS sample size and decision rule is sensitive to the distribution of prior information, as well as to individual beliefs about the importance of correct classification. Application of the B-LQAS approach to the UNICEF data improves specificity and PPV in both time periods (1968–1989 and 2008) with minimal reductions in sensitivity and negative predictive value. Conclusions LQAS is shown to be a robust tool that is not necessarily prone to poor specificity and PPV as previously alleged. In situations where prior or historical data are available, B-LQAS can lead to improvements in expected performance. PMID:23378151
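The classical calculation that B-LQAS generalizes can be stated in a few lines: for sample size n and decision rule d ("accept the lot if at least d successes"), producer and consumer risks are binomial tail probabilities at the upper and lower coverage thresholds. The design (n=19, d=13) and the 80%/50% thresholds below are a commonly cited illustrative choice, not values from this paper.

```python
# Producer/consumer risks for a standard LQAS design.
from scipy.stats import binom

n, d = 19, 13                  # accept if >= d of n sampled are "successes"
p_upper, p_lower = 0.80, 0.50  # e.g. adequate vs inadequate vaccination coverage

producer_risk = binom.cdf(d - 1, n, p_upper)       # good area (p=0.80) wrongly rejected
consumer_risk = 1 - binom.cdf(d - 1, n, p_lower)   # bad area (p=0.50) wrongly accepted
print(f"producer risk: {producer_risk:.3f}, consumer risk: {consumer_risk:.3f}")
```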
The effects of shared information on semantic calculations in the gene ontology.
Bible, Paul W; Sun, Hong-Wei; Morasso, Maria I; Loganantharaj, Rasiah; Wei, Lai
2017-01-01
The structured vocabulary that describes gene function, the gene ontology (GO), serves as a powerful tool in biological research. One application of GO in computational biology calculates semantic similarity between two concepts to make inferences about the functional similarity of genes. A class of term similarity algorithms explicitly calculates the shared information (SI) between concepts and then substitutes this calculation into traditional term similarity measures such as Resnik, Lin, and Jiang-Conrath. Alternative SI approaches, when combined with ontology choice and term similarity type, lead to many gene-to-gene similarity measures. No thorough investigation has been made into the behavior, complexity, and performance of semantic methods derived from distinct SI approaches. We apply bootstrapping to compare the generalized performance of 57 gene-to-gene semantic measures across six benchmarks. Considering the number of measures, we additionally evaluate whether these methods can be leveraged through ensemble machine learning to improve prediction performance. Results showed that the choice of ontology type most strongly influenced performance across all evaluations. Combining measures into an ensemble classifier reduces cross-validation error beyond any individual measure for protein interaction prediction. This improvement resulted from information gained through the combination of ontology types, as ensemble methods within each GO type offered no improvement. These results demonstrate that multiple SI measures can be leveraged for machine learning tasks such as automated gene function prediction by incorporating methods from across the ontologies. To facilitate future research in this area, we developed the GO Graph Tool Kit (GGTK), an open source C++ library with a Python interface (github.com/paulbible/ggtk).
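To show how a shared-information value plugs into the three term-similarity measures named above, here is a small sketch. The IC values and the SI of the two terms are invented stand-ins; in practice SI would come from the GO graph, e.g. as the information content of the most informative common ancestor, and the Jiang-Conrath distance-to-similarity transform below is one common choice among several.

```python
# Resnik, Lin, and Jiang-Conrath term similarity from a shared-information value.
def resnik(si):
    return si                               # similarity = shared information itself

def lin(si, ic1, ic2):
    return 2 * si / (ic1 + ic2)             # SI normalized by the terms' own IC

def jiang_conrath(si, ic1, ic2):
    dist = ic1 + ic2 - 2 * si               # JC semantic distance
    return 1 / (1 + dist)                   # one common bounded transform

ic_a, ic_b = 5.2, 6.1   # hypothetical information content of two GO terms
si = 3.8                # hypothetical shared information (e.g. IC of the MICA)
print(resnik(si), round(lin(si, ic_a, ic_b), 3), round(jiang_conrath(si, ic_a, ic_b), 3))
```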
Effinger, Angela; O'Driscoll, Caitriona M; McAllister, Mark; Fotaki, Nikoletta
2018-05-16
Drug product performance in patients with gastrointestinal (GI) diseases can be altered compared to healthy subjects due to pathophysiological changes. In this review, relevant differences in patients with inflammatory bowel diseases, coeliac disease, irritable bowel syndrome and short bowel syndrome are discussed, and possible in vitro and in silico tools to predict drug product performance in this patient population are assessed. Drug product performance was altered in patients with GI diseases compared to healthy subjects, as assessed in a limited number of studies for some drugs. Underlying causes include observed pathophysiological alterations such as differences in GI transit time, the composition of the GI fluids and GI permeability. Additionally, alterations in the abundance of metabolising enzymes and transporter systems were observed. The effect of the GI diseases on each parameter is not always evident, as it may depend on the location and the state of the disease. The impact of a pathophysiological change on drug bioavailability depends on the physicochemical characteristics of the drug, the pharmaceutical formulation and drug metabolism. In vitro and in silico methods to predict drug product performance in patients with GI diseases are currently limited but could be a useful tool to improve drug therapy. Development of suitable in vitro dissolution and in silico models for patients with GI diseases can improve their drug therapy. The likelihood of the models providing accurate predictions depends on the knowledge of pathophysiological alterations, and thus further assessment of physiological differences is essential. © 2018 Royal Pharmaceutical Society.
Moisen, Gretchen G.; Freeman, E.A.; Blackard, J.A.; Frescino, T.S.; Zimmermann, N.E.; Edwards, T.C.
2006-01-01
Many efforts are underway to produce broad-scale forest attribute maps by modelling forest class and structure variables collected in forest inventories as functions of satellite-based and biophysical information. Typically, variants of classification and regression trees implemented in RuleQuest's See5 and Cubist (for binary and continuous responses, respectively) are the tools of choice in many of these applications. These tools are widely used in large remote sensing applications, but are not easily interpretable, do not have ties with survey estimation methods, and use proprietary unpublished algorithms. Consequently, three alternative modelling techniques were compared for mapping presence and basal area of 13 species located in the mountain ranges of Utah, USA. The modelling techniques compared included the widely used See5/Cubist, generalized additive models (GAMs), and stochastic gradient boosting (SGB). Model performance was evaluated using independent test data sets. Evaluation criteria for mapping species presence included specificity, sensitivity, Kappa, and area under the curve (AUC). Evaluation criteria for the continuous basal area variables included correlation and relative mean squared error. For predicting species presence (setting thresholds to maximize Kappa), SGB had higher values for the majority of the species for specificity and Kappa, while GAMs had higher values for the majority of the species for sensitivity. In evaluating resultant AUC values, GAM and/or SGB models had significantly better results than the See5 models where significant differences could be detected between models. For nine out of 13 species, basal area prediction results for all modelling techniques were poor (correlations less than 0.5 and relative mean squared errors greater than 0.8), but SGB provided the most stable predictions in these instances. SGB and Cubist performed equally well for modelling basal area for three species with moderate prediction success, while all three modelling tools produced comparably good predictions (correlation of 0.68 and relative mean squared error of 0.56) for one species. © 2006 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Wright, David; Thyer, Mark; Westra, Seth
2015-04-01
Highly influential data points are those that have a disproportionately large impact on model performance, parameters and predictions. However, in current hydrological modelling practice the relative influence of individual data points on hydrological model calibration is not commonly evaluated. This presentation illustrates and evaluates several influence diagnostics tools that hydrological modellers can use to assess the relative influence of data. The feasibility and importance of including influence detection diagnostics as a standard tool in hydrological model calibration is discussed. Two classes of influence diagnostics are evaluated: (1) computationally demanding numerical "case deletion" diagnostics; and (2) computationally efficient analytical diagnostics, based on Cook's distance. These diagnostics are compared against hydrologically oriented diagnostics that describe changes in the model parameters (measured through the Mahalanobis distance), performance (objective function displacement) and predictions (mean and maximum streamflow). These influence diagnostics are applied to two case studies: a stage/discharge rating curve model, and a conceptual rainfall-runoff model (GR4J). Removing a single data point from the calibration resulted in differences to mean flow predictions of up to 6% for the rating curve model, and differences to mean and maximum flow predictions of up to 10% and 17%, respectively, for the hydrological model. When using the Nash-Sutcliffe efficiency in calibration, the computationally cheaper Cook's distance metrics produce similar results to the case-deletion metrics at a fraction of the computational cost. However, Cook's distance is adapted from linear regression, with inherent assumptions about the data, and is therefore less flexible than case deletion. Influential point detection diagnostics show great potential to improve current hydrological modelling practices by identifying highly influential data points. The findings of this study establish the feasibility and importance of including influential point detection diagnostics as a standard tool in hydrological model calibration. They provide the hydrologist with important information on whether model calibration is susceptible to a small number of highly influential data points. This enables the hydrologist to make a more informed decision about whether to (1) remove/retain the calibration data; (2) adjust the calibration strategy and/or hydrological model to reduce the susceptibility of model predictions to a small number of influential observations.
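The analytical diagnostic named above has a compact closed form in the linear-regression setting it comes from. The following sketch, on synthetic data with one planted outlier, computes Cook's distance from hat-matrix leverages; applying the idea to a rainfall-runoff model would require the case-deletion route instead.

```python
# Cook's distance for an ordinary least-squares fit with one planted outlier.
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 30)
y = 2.0 * x + rng.normal(0, 1, 30)
y[-1] += 8.0                                  # plant one influential point

X = np.column_stack([np.ones_like(x), x])     # design matrix with intercept
H = X @ np.linalg.inv(X.T @ X) @ X.T          # hat matrix
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
p = X.shape[1]
s2 = resid @ resid / (len(y) - p)             # residual variance estimate
h = np.diag(H)                                # leverages
cooks_d = resid**2 / (p * s2) * h / (1 - h)**2
print("most influential point:", int(np.argmax(cooks_d)), float(cooks_d.max()))
```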
Kastoer, Chloé; Dieltjens, Marijke; Oorts, Eline; Hamans, Evert; Braem, Marc J.; Van de Heyning, Paul H.; Vanderveken, Olivier M.
2016-01-01
Study Objectives: To perform a review of the current evidence regarding the use of a remotely controlled mandibular positioner (RCMP) and to analyze the efficacy of RCMP as a predictive selection tool in the treatment of obstructive sleep apnea (OSA) with oral appliances that protrude the mandible (OAm), relying exclusively on single-night RCMP titration. Methods: An extensive literature search was performed through PubMed.com, Thecochranelibrary.com (CENTRAL only), Embase.com, and recent conference meeting abstracts in the field. Results: A total of 254 OSA patients from four full-text articles and 5 conference meeting abstracts contribute data to the review. Criteria for a successful RCMP test and for success with OAm differed between studies. Study populations were not fully comparable due to differences in the range of baseline apnea-hypopnea index (AHI). However, in all studies elimination of airway obstruction events during sleep by RCMP titration predicted OAm therapy success through the determination of the most effective target protrusive position (ETPP). A statistically significant association was found between mean AHI predicted outcome with RCMP and treatment outcome with OAm on polysomnographic or portable sleep monitoring evaluation (p < 0.05). Conclusions: The existing evidence regarding the use of RCMP in patients with OSA indicates that it might be possible to protrude the mandible progressively during sleep under poly(somno)graphic observation by RCMP until respiratory events are eliminated, without disturbing sleep or arousing the patient. ETPP as measured by the use of RCMP was significantly associated with success of OAm therapy in the reported studies. RCMP might be a promising instrument for predicting OAm treatment outcome and targeting the degree of mandibular advancement needed. Citation: Kastoer C, Dieltjens M, Oorts E, Hamans E, Braem MJ, Van de Heyning PH, Vanderveken OM. The use of remotely controlled mandibular positioner as a predictive screening tool for mandibular advancement device therapy in patients with obstructive sleep apnea through single-night progressive titration of the mandible: a systematic review. J Clin Sleep Med 2016;12(10):1411–1421. PMID:27568892
Ligand Binding Site Detection by Local Structure Alignment and Its Performance Complementarity
Lee, Hui Sun; Im, Wonpil
2013-01-01
Accurate determination of potential ligand binding sites (BS) is a key step for protein function characterization and structure-based drug design. Despite promising results of template-based BS prediction methods using global structure alignment (GSA), there is room to improve the performance by properly incorporating local structure alignment (LSA), because BS are local structures and are often similar for proteins with dissimilar global folds. We present a template-based ligand BS prediction method using G-LoSA, our LSA tool. A large benchmark set validation shows that G-LoSA predicts drug-like ligands' positions in single-chain protein targets more precisely than TM-align, a GSA-based method, while the overall success rate of TM-align is better. G-LoSA is particularly efficient for accurate detection of local structures conserved across proteins with diverse global topologies. Recognizing the performance complementarity of G-LoSA to TM-align and a non-template geometry-based method, fpocket, a robust consensus scoring method, CMCS-BSP (Complementary Methods and Consensus Scoring for ligand Binding Site Prediction), is developed and shows improved prediction accuracy. The G-LoSA source code is freely available at http://im.bioinformatics.ku.edu/GLoSA. PMID:23957286
Contextual predictability enhances reading performance in patients with schizophrenia.
Fernández, Gerardo; Guinjoan, Salvador; Sapognikoff, Marcelo; Orozco, David; Agamennoni, Osvaldo
2016-07-30
In the present work we analyzed fixation durations in 40 healthy individuals and 18 patients with chronic, stable SZ during reading of regular sentences and proverbs. While they read, their eye movements were recorded. We used linear mixed models to analyze fixation durations. The predictability of words N-1, N, and N+1 exerted a strong influence on controls and SZ patients. The influence of the predictabilities of preceding, current, and upcoming words on SZ patients was clearly reduced for proverbs in comparison to regular sentences. Both controls and SZ readers were able to use highly predictable fixated words for easier reading. Our results suggest that SZ readers might compensate for attentional and working memory deficiencies by using stored information of familiar texts to enhance their reading performance. The predictabilities of words in proverbs serve as task-appropriate cues that are used by SZ readers. To the best of our knowledge, this is the first study using eyetracking to measure how patients with SZ process well-defined words embedded in regular sentences and proverbs. Evaluation of the resulting changes in fixation durations might provide a useful tool for understanding how SZ patients could enhance their reading performance. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Souza, João Paulo; Oladapo, Olufemi T; Bohren, Meghan A; Mugerwa, Kidza; Fawole, Bukola; Moscovici, Leonardo; Alves, Domingos; Perdona, Gleici; Oliveira-Ciabati, Livia; Vogel, Joshua P; Tunçalp, Özge; Zhang, Jim; Hofmeyr, Justus; Bahl, Rajiv; Gülmezoglu, A Metin
2015-05-26
The partograph is currently the main tool available to support decision-making of health professionals during labour. However, the rate of appropriate use of the partograph is disappointingly low. Apart from limitations that are associated with partograph use, evidence of positive impact on labour-related health outcomes is lacking. The main goal of this study is to develop a Simplified, Effective, Labour Monitoring-to-Action (SELMA) tool. The primary objectives are: to identify the essential elements of intrapartum monitoring that trigger the decision to use interventions aimed at preventing poor labour outcomes; to develop a simplified, monitoring-to-action algorithm for labour management; and to compare the diagnostic performance of SELMA and partograph algorithms as tools to identify women who are likely to develop poor labour-related outcomes. A prospective cohort study will be conducted in eight health facilities in Nigeria and Uganda (four facilities from each country). All women admitted for vaginal birth will comprise the study population (estimated sample size: 7,812 women). Data will be collected on maternal characteristics on admission, labour events and pregnancy outcomes by trained research assistants at the participating health facilities. Prediction models will be developed to identify women at risk of intrapartum-related perinatal death or morbidity (primary outcomes) throughout the course of labour. These predictions models will be used to assemble a decision-support tool that will be able to suggest the best course of action to avert adverse outcomes during the course of labour. To develop this set of prediction models, we will use up-to-date techniques of prognostic research, including identification of important predictors, assigning of relative weights to each predictor, estimation of the predictive performance of the model through calibration and discrimination, and determination of its potential for application using internal validation techniques. This research offers an opportunity to revisit the theoretical basis of the partograph. It is envisioned that the final product would help providers overcome the challenging tasks of promptly interpreting complex labour information and deriving appropriate clinical actions, and thus increase efficiency of the care process, enhance providers' competence and ultimately improve labour outcomes. Please see related articles ' http://dx.doi.org/10.1186/s12978-015-0027-6 ' and ' http://dx.doi.org/10.1186/s12978-015-0028-5 '.
Marschollek, Michael; Rehwald, Anja; Wolf, Klaus-Hendrik; Gietzelt, Matthias; Nemitz, Gerhard; zu Schwabedissen, Hubertus Meyer; Schulze, Mareike
2011-06-28
Fall events contribute significantly to mortality, morbidity and costs in our ageing population. In order to identify persons at risk and to target preventive measures, many scores and assessment tools have been developed. These often require expertise and are costly to implement. Recent research investigates the use of wearable inertial sensors to provide objective data on motion features which can be used to assess individual fall risk automatically. So far it is unknown how well this new method performs in comparison with conventional fall risk assessment tools. The aim of our research is to compare the predictive performance of our new sensor-based method with conventional and established methods, based on prospective data. In a first study phase, 119 inpatients of a geriatric clinic took part in motion measurements using a wireless triaxial accelerometer during a Timed Up&Go (TUG) test and a 20 m walk. Furthermore, the St. Thomas Risk Assessment Tool in Falling Elderly Inpatients (STRATIFY) was performed, and the multidisciplinary geriatric care team estimated the patients' fall risk. In a second follow-up phase of the study, 46 of the participants were interviewed after one year, including a fall and activity assessment. The predictive performances of the TUG, the STRATIFY and team scores are compared. Furthermore, two automatically induced logistic regression models based on conventional clinical and assessment data (CONV) as well as sensor data (SENSOR) are matched. Among the risk assessment scores, the geriatric team score (sensitivity 56%, specificity 80%) outperforms STRATIFY and TUG. The induced logistic regression models CONV and SENSOR achieve similar performance values (sensitivity 68%/58%, specificity 74%/78%, AUC 0.74/0.72, +LR 2.64/2.61). Both models are able to identify more persons at risk than the simple scores. Sensor-based objective measurements of motion parameters in geriatric patients can be used to assess individual fall risk, and our prediction model's performance matches that of a model based on conventional clinical and assessment data. Sensor-based measurements using a small wearable device may contribute significant information to conventional methods and are feasible in an unsupervised setting. More prospective research is needed to assess the cost-benefit relation of our approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdelaziz, Omar; Fricke, Brian A; Vineyard, Edward Allan
Commercial refrigeration systems are known to be prone to high leak rates and to consume large amounts of electricity. As such, direct emissions related to refrigerant leakage and indirect emissions resulting from primary energy consumption contribute greatly to their Life Cycle Climate Performance (LCCP). In this paper, an LCCP design tool is used to evaluate the performance of a typical commercial refrigeration system with alternative refrigerants and minor system modifications, to provide lower Global Warming Potential (GWP) refrigerant solutions with improved LCCP compared to baseline systems. The LCCP design tool accounts for system performance, ambient temperature, and system load; system performance is evaluated using a validated vapor compression system simulation tool, while ambient temperature and system load are derived from a widely used building energy modeling tool (EnergyPlus). The LCCP design tool also accounts for the change in hourly electricity emission rate to yield an accurate prediction of indirect emissions. The analysis shows that conventional commercial refrigeration system life cycle emissions are largely due to direct emissions associated with refrigerant leaks and that system efficiency plays a smaller role in the LCCP. However, as a transition occurs to low-GWP refrigerants, the indirect emissions become more relevant. Low-GWP refrigerants may not be suitable as drop-in replacements in conventional commercial refrigeration systems; however, some mixtures may be introduced as transitional drop-in replacements. These transitional refrigerants have a significantly lower GWP than baseline refrigerants and, as such, improved LCCP. The paper concludes with a brief discussion of the tradeoffs between refrigerant GWP, efficiency and capacity.
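The direct-versus-indirect split that drives the conclusion above reduces to simple accounting, sketched below with illustrative numbers (charge, leak rate, GWP, electricity use, and grid factor are all assumptions, not values from the paper, and the real tool varies the grid factor hourly).

```python
# Back-of-envelope LCCP accounting: direct (leakage) + indirect (electricity).
charge_kg        = 200.0     # refrigerant charge
annual_leak_rate = 0.15      # 15% of charge leaks per year (typical assumption)
gwp              = 3922      # e.g. R-404A, 100-yr GWP
lifetime_yr      = 15
annual_kwh       = 500_000   # supermarket refrigeration electricity use
grid_kgco2_kwh   = 0.4       # assumed constant grid emission factor

direct   = charge_kg * annual_leak_rate * lifetime_yr * gwp   # kg CO2-eq
indirect = annual_kwh * lifetime_yr * grid_kgco2_kwh          # kg CO2-eq
print(f"direct {direct / 1000:.0f} t vs indirect {indirect / 1000:.0f} t CO2-eq")
```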
NASA Astrophysics Data System (ADS)
Leclercq, Sylvain; Lidbury, David; Van Dyck, Steven; Moinereau, Dominique; Alamo, Ana; Mazouzi, Abdou Al
2010-11-01
In nuclear power plants, materials may undergo degradation due to severe irradiation conditions that may limit their operational life. Utilities that operate these reactors need to quantify the ageing and potential degradation of essential structures of the power plant to ensure safe and reliable plant operation. So far, the material databases needed to take account of these degradations in the design and safe operation of installations rely mainly on long-term irradiation programs in test reactors as well as on mechanical or corrosion testing in specialized hot cells. Continuous progress in the physical understanding of the phenomena involved in irradiation damage and in computer science has now made possible the development of multi-scale numerical tools able to simulate the effects of irradiation on materials microstructure. A first step towards this goal was successfully reached through the development of the RPV-2 and Toughness Module numerical tools by the scientific community created around the FP6 PERFECT project. These tools make it possible to simulate irradiation effects on the constitutive behaviour of the reactor pressure vessel low alloy steel, and also on its failure properties. Building on the existing PERFECT Roadmap, the four-year Collaborative Project PERFORM 60 has as its main objective the development of multi-scale tools aimed at predicting the combined effects of irradiation and corrosion on internals (austenitic stainless steels), and the improvement of existing tools for the RPV (bainitic steels). PERFORM 60 is based on two technical sub-projects: (i) RPV and (ii) internals. In addition to these technical sub-projects, the Users' Group and Training sub-project shall allow representatives of constructors, utilities, research organizations… from Europe, USA and Japan to receive the information and training needed to form their own appraisal of the limits and potential of the developed tools. An important effort will also be made to train young researchers in the field of materials degradation. PERFORM 60 officially started on March 1st, 2009, with 20 European organizations and universities involved in the nuclear field.
Hunter, Christopher L; Silvestri, Salvatore; Ralls, George; Stone, Amanda; Walker, Ayanna; Mangalat, Neal; Papa, Linda
2018-05-01
Early identification of sepsis significantly improves outcomes, suggesting a role for prehospital screening. An end-tidal carbon dioxide (ETCO2) value ≤ 25 mmHg predicts mortality and severe sepsis when used as part of a prehospital screening tool. Recently, the Quick Sequential Organ Failure Assessment (qSOFA) score was also derived as a tool for predicting poor outcomes in potentially septic patients. We conducted a retrospective cohort study among patients transported by emergency medical services to compare the use of ETCO2 ≤ 25 mmHg with a qSOFA score ≥ 2 as a predictor of mortality or a diagnosis of severe sepsis in prehospital patients with suspected sepsis. By comparison of receiver operator characteristic curves, ETCO2 had a higher discriminatory power to predict mortality, sepsis, and severe sepsis than qSOFA. Both non-invasive measures were easily obtainable by prehospital personnel, with ETCO2 performing slightly better as an outcome predictor.
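The two flags being compared are simple enough to state as code. The qSOFA criteria (respiratory rate ≥ 22/min, systolic BP ≤ 100 mmHg, altered mentation) are the standard published definition, and the ETCO2 threshold follows the abstract; the function and field names are hypothetical.

```python
# Prehospital sepsis flags: qSOFA >= 2 versus ETCO2 <= 25 mmHg.
def qsofa_score(rr: int, sbp: int, gcs: int) -> int:
    # One point each for tachypnea, hypotension, and altered mentation (GCS < 15).
    return (rr >= 22) + (sbp <= 100) + (gcs < 15)

def sepsis_flags(rr: int, sbp: int, gcs: int, etco2_mmhg: float) -> dict:
    return {
        "qSOFA_positive": qsofa_score(rr, sbp, gcs) >= 2,
        "ETCO2_positive": etco2_mmhg <= 25,
    }

print(sepsis_flags(rr=26, sbp=95, gcs=14, etco2_mmhg=22))
```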
Exfoliative cytology: a helpful tool for the diagnosis of paracoccidioidomycosis.
Cardoso, S V; Moreti, M M; Costa, I M; Loyola, A M
2001-07-01
To describe the main cytological findings associated with smears collected from oral lesions of paracoccidioidomycosis and to appraise the use of cytology as a diagnostic tool for the disease. Cytological smears and biopsies were collected from 40 lesions with a clinical suspicion of paracoccidioidomycosis. The sensitivity, specificity, positive and negative predictive values, accuracy, and positive likelihood ratio of the oral smear were evaluated against the histological diagnosis, which is considered the 'gold standard'. The main morphological findings were the round-shaped, birefringent and multiple-budded fungi, Langhans' giant cells and epithelioid cells. The following associative measures were found: sensitivity, 67.9%; specificity, 91.7%; positive predictive value, 95.0%; negative predictive value, 55.0%; accuracy, 75.0%; and positive likelihood ratio, 8.14. The cytological findings of paracoccidioidomycosis are characteristic and cytology is accurate in the diagnosis of the disease. Positive patients should be treated. Negative patients should undergo biopsy to confirm or dismiss the diagnosis of this mycosis.
Machine learning models for lipophilicity and their domain of applicability.
Schroeter, Timon; Schwaighofer, Anton; Mika, Sebastian; Laak, Antonius Ter; Suelzle, Detlev; Ganzer, Ursula; Heinrich, Nikolaus; Müller, Klaus-Robert
2007-01-01
Unfavorable lipophilicity and water solubility cause many drug failures; therefore these properties have to be taken into account early on in lead discovery. Commercial tools for predicting lipophilicity have usually been trained on small and neutral molecules, and are thus often unable to accurately predict in-house data. Using a modern Bayesian machine learning algorithm - a Gaussian process model - this study constructs a log D7 model based on 14,556 drug discovery compounds of Bayer Schering Pharma. Performance is compared with support vector machines, decision trees, ridge regression, and four commercial tools. In a blind test on 7013 new measurements from the preceding months (including compounds from new projects), 81% were predicted correctly within 1 log unit, compared to only 44% achieved by commercial software. Additional evaluations using public data are presented. We consider error bars for each method (model-based, ensemble-based, and distance-based approaches), and investigate how well they quantify the domain of applicability of each model.
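A minimal sketch of this modelling pattern follows: Gaussian process regression on molecular descriptors, where the posterior standard deviation supplies the per-compound error bar used to judge domain of applicability. The descriptors and "log D7" responses are synthetic stand-ins, since the proprietary data are not available.

```python
# GP regression with model-based error bars on toy descriptor data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 5))                            # five toy descriptors
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.2, 200)    # toy "log D7"

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X[:150], y[:150])
mean, std = gp.predict(X[150:], return_std=True)         # std = model-based error bar
inside = np.abs(mean - y[150:]) <= 1.0                   # within 1 log unit?
print(f"{inside.mean():.0%} within 1 log unit; mean error bar {std.mean():.2f}")
```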
Isotope and Chemical Methods in Support of the U.S. Geological Survey Science Strategy, 2003-2008
Rye, R.O.; Johnson, C.A.; Landis, G.P.; Hofstra, A.H.; Emsbo, P.; Stricker, C.A.; Hunt, A.G.; Rusk, B.G.
2008-01-01
Principal functions of the Mineral Resources Program are providing information to decision-makers related to mineral deposits on federal lands and predicting the environmental consequences of the mining or natural weathering of those deposits. Performing these functions requires that predictions be made of the likelihood of undiscovered deposits. The predictions are based on geologic and geoenvironmental models that are constructed for the various types of mineral deposits from detailed descriptions of actual deposits and detailed understanding of the processes that formed them. Over the past three decades the understanding of ore-forming processes has benefitted greatly from the integration of laboratory-based geochemical tools with field observations and other data sources. Under the aegis of the Evolution of Ore Deposits and Technology Transfer Project (EODTTP), a five-year effort that terminated in 2008, the Mineral Resources Program provided state-of-the-art analytical capabilities to support applications of several related geochemical tools.
A global model for steady state and transient S.I. engine heat transfer studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bohac, S.V.; Assanis, D.N.; Baker, D.M.
1996-09-01
A global, systems-level model which characterizes the thermal behavior of internal combustion engines is described in this paper. Based on resistor-capacitor thermal networks, either steady-state or transient thermal simulations can be performed. A two-zone, quasi-dimensional spark-ignition engine simulation is used to determine in-cylinder gas temperature and convection coefficients. Engine heat fluxes and component temperatures can subsequently be predicted from specification of general engine dimensions, materials, and operating conditions. Emphasis has been placed on minimizing the number of model inputs and keeping them as simple as possible to make the model practical and useful as an early design tool. The success of the global model depends on properly scaling the general engine inputs to accurately model engine heat flow paths across families of engine designs. The development and validation of suitable, scalable submodels is described in detail in this paper. Simulation sub-models and overall system predictions are validated with data from two spark ignition engines. Several sensitivity studies are performed to determine the most significant heat transfer paths within the engine and exhaust system. Overall, it has been shown that the model is a powerful tool in predicting steady-state heat rejection and component temperatures, as well as transient component temperatures.
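The resistor-capacitor thermal network underlying such a model reduces to a small ODE system: lumped nodes with thermal capacitance exchange heat through thermal resistances. The two-node sketch below (head and coolant nodes, fixed gas-side temperature) is illustrative only; node structure and all parameter values are assumptions, not the paper's.

```python
# Two-node RC thermal network: transient warm-up of head and coolant nodes.
from scipy.integrate import solve_ivp

C_head, C_cool = 5e3, 2e4              # J/K, lumped thermal capacitances
R_gas_head = 0.02                      # K/W, gas -> head resistance
R_head_cool = 0.005                    # K/W, head -> coolant resistance
R_cool_amb = 0.01                      # K/W, coolant -> ambient resistance
T_gas, T_amb = 900.0, 300.0            # K, boundary temperatures

def rhs(t, T):
    T_h, T_c = T
    dT_h = ((T_gas - T_h) / R_gas_head - (T_h - T_c) / R_head_cool) / C_head
    dT_c = ((T_h - T_c) / R_head_cool - (T_c - T_amb) / R_cool_amb) / C_cool
    return [dT_h, dT_c]

sol = solve_ivp(rhs, [0, 600], [300.0, 300.0], max_step=1.0)  # 10-min warm-up
print(f"head {sol.y[0, -1]:.0f} K, coolant {sol.y[1, -1]:.0f} K at t = 600 s")
```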
Development and Overview of CPAS Sasquatch Airdrop Landing Location Predictor Software
NASA Technical Reports Server (NTRS)
Bledsoe, Kristin J.; Bernatovich, Michael A.
2015-01-01
The Capsule Parachute Assembly System (CPAS) is the parachute system for NASA's Orion spacecraft. CPAS is currently in the Engineering Development Unit (EDU) phase of testing. The test program consists of numerous drop tests, wherein a test article rigged with parachutes is extracted from an aircraft. During such tests, range safety is paramount, as is the recoverability of the parachutes and test article. It is crucial to establish a release point from the aircraft that will ensure that the article and all items released from it during flight will land in a designated safe area. The Sasquatch footprint tool was developed to determine this safe release point and to predict the probable landing locations (footprints) of the payload and all released objects. In 2012, a new version of Sasquatch, called Sasquatch Polygons, was developed that significantly upgraded the capabilities of the footprint tool. Key improvements were an increase in the accuracy of the predictions, and the addition of an interface with the Debris Tool (DT), an in-flight debris avoidance tool for use on the test observation helicopter. Additional enhancements include improved data presentation for communication with test personnel and a streamlined code structure. This paper discusses the development, validation, and performance of Sasquatch Polygons, as well as its differences from the original Sasquatch footprint tool.
PepMapper: a collaborative web tool for mapping epitopes from affinity-selected peptides.
Chen, Wenhan; Guo, William W; Huang, Yanxin; Ma, Zhiqiang
2012-01-01
Epitope mapping from affinity-selected peptides has become popular in epitope prediction, and correspondingly many Web-based tools have been developed in recent years. However, the performance of these tools varies in different circumstances. To address this problem, we employed an ensemble approach that incorporates two popular Web tools, MimoPro and Pep-3D-Search, to take advantage of the strengths of both methods and give users more options for their specific purposes of epitope-peptide mapping. The combined operation of Union finds as many associated peptides as possible from both methods, which increases sensitivity in finding potential epitopic regions on a given antigen surface. The combined operation of Intersection achieves to some extent the mutual verification of the two methods and hence increases the likelihood of locating the genuine epitopic region on a given antigen in relation to the interacting peptides. The consistency between Intersection and Union provides an indirect indication of the likelihood of successful peptide-epitope mapping. On average across 27 tests, the combined operations of PepMapper outperformed either MimoPro or Pep-3D-Search alone. Therefore, PepMapper is another multipurpose mapping tool for epitope prediction from affinity-selected peptides. The Web server can be freely accessed at: http://informatics.nenu.edu.cn/PepMapper/
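The ensemble set logic described above can be sketched directly. The residue identifiers below are hypothetical stand-ins for the two tools' outputs, and the ratio used for "consistency" is one plausible reading of the abstract, not PepMapper's documented definition.

```python
# Union/Intersection ensemble over two epitope predictors' residue sets.
mimopro     = {"A12", "G13", "K15", "D44", "E45"}   # hypothetical MimoPro hits
pep3dsearch = {"G13", "K15", "E45", "R78"}          # hypothetical Pep-3D-Search hits

union        = mimopro | pep3dsearch                # maximise sensitivity
intersection = mimopro & pep3dsearch                # mutual verification
consistency  = len(intersection) / len(union)       # agreement of the two methods
print(sorted(union), sorted(intersection), f"consistency {consistency:.2f}")
```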
The development of a tool to predict team performance.
Sinclair, M A; Siemieniuch, C E; Haslam, R A; Henshaw, M J D C; Evans, L
2012-01-01
The paper describes the development of a tool to predict quantitatively the success of a team when executing a process. The tool was developed for the UK defence industry, though it may be useful in other domains. It is expected to be used by systems engineers in initial stages of systems design, when concepts are still fluid, including the structure of the team(s) which are expected to be operators within the system. It enables answers to be calculated for questions such as "What happens if I reduce team size?" and "Can I reduce the qualifications necessary to execute this process and still achieve the required level of success?". The tool has undergone verification and validation; it predicts fairly well and shows promise. An unexpected finding is that the tool creates a good a priori argument for significant attention to Human Factors Integration in systems projects. The simulations show that if a systems project takes full account of human factors integration (selection, training, process design, interaction design, culture, etc.) then the likelihood of team success will be in excess of 0.95. As the project derogates from this state, the likelihood of team success will drop as low as 0.05. If the team has good internal communications and good individuals in key roles, the likelihood of success rises towards 0.25. Even with a team comprising the best individuals, p(success) will not be greater than 0.35. It is hoped that these results will be useful for human factors professionals involved in systems design. Copyright © 2011 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Riaz, Umbreen; Shah, Syed Aslam; Zahoor, Imran; Riaz, Arsalan; Zubair, Muhammad
2014-07-01
To determine the validity of an early (one hour postoperative) parathyroid hormone (PTH) assay (≤ 10 pg/ml), with the serum ionic calcium level as the gold standard, for predicting sub-total thyroidectomy-related hypocalcaemia, and to calculate the sensitivity and specificity of latent signs of tetany. Cross-sectional validation study. Department of General Surgery, Pakistan Institute of Medical Sciences, Islamabad, from August 2008 to August 2010. Patients undergoing sub-total thyroidectomy were included by convenience sampling. The PTH assay was performed 1 hour post sub-total thyroidectomy. Serum calcium levels were measured at 24 and 48 hours, on the 5th day, and 2 weeks after surgery. Cases that developed hypocalcaemia were followed up for a period of 6 months with monthly calcium level estimation to identify cases of permanent hypocalcaemia. Symptoms and signs of hypocalcaemia manifesting in our patients were recorded. Data were analyzed with SPSS version 10. 2 × 2 tables were used to calculate the sensitivity and specificity of PTH in detecting post-thyroidectomy hypocalcaemia. Of a total of 110 patients included in the study, 16.36% (n=18) developed hypocalcaemia, including 1.81% (n=2) cases of permanent hypoparathyroidism. The sensitivity of the one hour postoperative PTH assay as a predictive tool for post-thyroidectomy hypocalcaemia was 94.4%, while its specificity was 83.6%, with a 53% positive predictive value and a 98.7% negative predictive value. A one hour post sub-total thyroidectomy PTH assay can be helpful in predicting post sub-total thyroidectomy hypocalcaemia. Moreover, it can be useful for the safe discharge of day-care thyroidectomy patients.
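With 110 patients and 18 hypocalcaemia cases, the published sensitivity and specificity imply a 2 × 2 table of roughly TP=17, FN=1, FP=15, TN=77; the sketch below recovers the quoted PPV and NPV from those counts. The cell counts are a reconstruction from the reported percentages, not published raw data.

```python
# Recovering sensitivity/specificity/PPV/NPV from the reconstructed 2x2 table.
tp, fn, fp, tn = 17, 1, 15, 77

sensitivity = tp / (tp + fn)   # 17/18  ~ 94.4%
specificity = tn / (tn + fp)   # 77/92  ~ 83.7%
ppv = tp / (tp + fp)           # 17/32  ~ 53.1%
npv = tn / (tn + fn)           # 77/78  ~ 98.7%
print(f"sens {sensitivity:.1%}, spec {specificity:.1%}, PPV {ppv:.1%}, NPV {npv:.1%}")
```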
NASA Astrophysics Data System (ADS)
Kadow, C.; Illing, S.; Kunst, O.; Cubasch, U.
2014-12-01
The project 'Integrated Data and Evaluation System for Decadal Scale Prediction' (INTEGRATION), part of the German decadal prediction project MiKlip, is developing a central evaluation system. The fully operational hybrid system features HPC shell access and a user-friendly web interface. It employs one common system with a variety of verification tools and validation data from different projects inside and outside of MiKlip. The evaluation system is located at the German Climate Computing Centre (DKRZ) and has direct access to the bulk of its ESGF node, including millions of climate model data sets, e.g. from CMIP5 and CORDEX. The database is organized according to the international CMOR standard, using the meta information of the self-describing model, reanalysis and observational data sets. Apache Solr is used to index the different data projects into one common search environment. This metadata system, with its advanced but easy-to-use search tool, helps users, developers and their tools retrieve the required information. A generic application programming interface (API) allows scientific developers to connect their analysis tools to the evaluation system independently of the programming language used. Users of the evaluation techniques benefit from the common interface of the evaluation system without needing to understand the different scripting languages. Facilitating the provision and use of tools and climate data automatically increases the number of scientists working with the data sets and identifying discrepancies. Additionally, the history and configuration sub-system stores every analysis performed with the evaluation system in a MySQL database. Configurations and results of the tools can be shared among scientists via the shell or the web system. Plugged-in tools therefore gain automatically in transparency and reproducibility. Furthermore, when the configuration of a newly started evaluation tool matches an earlier run, the system suggests reusing the results already produced by other users, saving CPU time, I/O and disk space. This study presents the different techniques and advantages of such a hybrid evaluation system making use of Big Data HPC in climate science. website: www-miklip.dkrz.de visitor-login: guest password: miklip
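The reuse logic of the history sub-system amounts to keying stored results by a canonical form of the tool configuration. A hypothetical sketch of the idea (the real system uses a MySQL database; all names and paths here are invented):

```python
# Hypothetical sketch of configuration-keyed result reuse, as described for
# the MiKlip history sub-system. The real system stores runs in MySQL; this
# in-memory version only illustrates the caching idea.
import hashlib
import json

_history: dict[str, str] = {}  # config hash -> path of stored result

def config_key(tool: str, config: dict) -> str:
    # Canonical JSON makes semantically equal configs hash identically.
    canonical = json.dumps({"tool": tool, "config": config}, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def run_tool(tool: str, config: dict) -> str:
    key = config_key(tool, config)
    if key in _history:
        # Matching earlier run: reuse it, saving CPU time, I/O and disk space.
        return _history[key]
    result_path = f"/results/{key[:12]}.nc"  # placeholder for real output
    _history[key] = result_path              # record run for later reuse
    return result_path

first = run_tool("decadal-skill", {"model": "MPI-ESM", "period": "1961-2010"})
again = run_tool("decadal-skill", {"model": "MPI-ESM", "period": "1961-2010"})
assert first == again  # second call reuses the stored result
```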
SWAT system performance predictions
NASA Astrophysics Data System (ADS)
Parenti, Ronald R.; Sasiela, Richard J.
1993-03-01
In the next phase of Lincoln Laboratory's SWAT (Short-Wavelength Adaptive Techniques) program, the performance of a 241-actuator adaptive-optics system will be measured using a variety of synthetic-beacon geometries. As an aid in this experimental investigation, a detailed set of theoretical predictions has also been assembled. The computational tools applied in this study include a numerical approach, in which Monte-Carlo ray-trace simulations of accumulated phase error are developed, and an analytical treatment of the expected system behavior. This report describes the basis of these two computational techniques and compares their estimates of overall system performance. Although their regions of applicability tend to be complementary rather than redundant, good agreement is usually obtained when both sets of results can be derived for the same engagement scenario.
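Where Monte-Carlo and analytical estimates can both be formed, agreement is typically checked on a summary statistic such as the residual phase variance or a Strehl ratio derived from it. A generic sketch of that kind of cross-check (not the SWAT codes; it only compares a Monte-Carlo Strehl estimate for Gaussian residual phase against the extended Maréchal approximation S ≈ exp(-σ²)):

```python
# Generic illustration (not the SWAT simulation): compare a Monte-Carlo
# Strehl estimate against the extended Marechal approximation
# S ~ exp(-sigma^2) for Gaussian residual phase errors.
import numpy as np

rng = np.random.default_rng(42)
sigma = 0.5          # assumed residual phase std dev (radians)
n_trials = 200_000

# Monte Carlo: on-axis Strehl is the squared magnitude of the mean complex
# field, |E[exp(i*phi)]|^2, estimated from random phase samples.
phi = rng.normal(0.0, sigma, n_trials)
strehl_mc = abs(np.exp(1j * phi).mean()) ** 2

strehl_analytic = np.exp(-sigma**2)
print(f"Monte Carlo: {strehl_mc:.4f}  analytic: {strehl_analytic:.4f}")
```

For Gaussian phase the two estimates agree closely, which mirrors the report's finding of good agreement where the methods' domains overlap.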
Cognitive performance predicts treatment decisional abilities in mild to moderate dementia.
Gurrera, R J; Moye, J; Karel, M J; Azar, A R; Armesto, J C
2006-05-09
To examine the contribution of neuropsychological test performance to treatment decision-making capacity in community volunteers with mild to moderate dementia. The authors recruited volunteers (44 men, 44 women) with mild to moderate dementia from the community. Subjects completed a battery of 11 neuropsychological tests that assessed auditory and visual attention, logical memory, language, and executive function. To measure decision-making capacity, the authors administered the Capacity to Consent to Treatment Interview, the Hopemont Capacity Assessment Interview, and the MacArthur Competence Assessment Tool--Treatment. Each of these instruments individually scores four decisional abilities underlying capacity: understanding, appreciation, reasoning, and expression of choice. The authors used principal components analysis to generate component scores for each ability across instruments, and to extract principal components of neuropsychological performance. Multiple linear regression analyses demonstrated that neuropsychological performance significantly predicted all four abilities. Specifically, it predicted 77.8% of the common variance for understanding, 39.4% for reasoning, 24.6% for appreciation, and 10.2% for expression of choice. Except for reasoning and appreciation, the neuropsychological predictor (beta) profiles were unique to each ability. Neuropsychological performance substantially and differentially predicted capacity for treatment decisions in individuals with mild to moderate dementia. Relationships between elemental cognitive function and decisional capacity may differ in individuals whose decisional capacity is impaired by other disorders, such as mental illness.
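The analysis pipeline (principal components of the neuropsychological battery used as predictors in a multiple linear regression for each decisional ability) can be sketched with synthetic data. Everything below is invented for illustration, including the sample values and the choice of four components:

```python
# Sketch of the reported analysis pipeline on synthetic data (all invented):
# PCA reduces an 11-test neuropsychological battery to component scores,
# which then predict one decisional-ability score by linear regression.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_subjects, n_tests = 88, 11           # 88 volunteers, 11 tests (as reported)
battery = rng.normal(size=(n_subjects, n_tests))
# Synthetic "understanding" score loosely tied to part of the battery:
understanding = battery[:, :3].mean(axis=1) + 0.3 * rng.normal(size=n_subjects)

pca = PCA(n_components=4)              # number of components is an assumption
components = pca.fit_transform(battery)

model = LinearRegression().fit(components, understanding)
r_squared = model.score(components, understanding)
print(f"variance explained (R^2): {r_squared:.1%}")
print("beta profile:", np.round(model.coef_, 2))  # one profile per ability
```

Repeating the regression for each of the four abilities yields the per-ability beta profiles whose distinctiveness the abstract reports.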