Statistical prediction of space motion sickness
NASA Technical Reports Server (NTRS)
Reschke, Millard F.
1990-01-01
Studies designed to empirically examine the etiology of motion sickness, to develop a foundation for enhancing its prediction, are discussed. Topics addressed include early attempts to predict space motion sickness; a multiple-test data base that uses provocative and vestibular function tests, and the subjects in that data base; the reliability of provocative tests of motion sickness susceptibility; the prediction of space motion sickness using linear discriminant analysis; and the prediction of space motion sickness susceptibility using the logistic model.
Extending Theory-Based Quantitative Predictions to New Health Behaviors.
Brick, Leslie Ann D; Velicer, Wayne F; Redding, Colleen A; Rossi, Joseph S; Prochaska, James O
2016-04-01
Traditional null hypothesis significance testing suffers from many limitations and is poorly adapted to theory testing. A proposed alternative approach, called Testing Theory-based Quantitative Predictions, uses effect size estimates and confidence intervals to directly test predictions based on theory. This paper replicates findings from previous smoking studies and extends the approach to diet and sun protection behaviors using baseline data from a Transtheoretical Model behavioral intervention (N = 5407). Effect size predictions were developed using two methods: (1) applying refined effect size estimates from previous smoking research or (2) using predictions developed by an expert panel. Thirteen of 15 predictions were confirmed for smoking. For diet, 7 of 14 predictions were confirmed using smoking predictions and 6 of 16 using expert panel predictions. For sun protection, 3 of 11 predictions were confirmed using smoking predictions and 5 of 19 using expert panel predictions. Expert panel predictions and smoking-based predictions poorly predicted effect sizes for diet and sun protection constructs. Future studies should aim to use previous empirical data to generate predictions whenever possible. The best results occur when there have been several iterations of predictions for a behavior, such as with smoking, demonstrating that expected values begin to converge on the population effect size. Overall, the study supports the necessity of strengthening and revising theory with empirical data.
Testing an earthquake prediction algorithm
Kossobokov, V.G.; Healy, J.H.; Dewey, J.W.
1997-01-01
A test to evaluate earthquake prediction algorithms is being applied to a Russian algorithm known as M8. The M8 algorithm makes intermediate-term predictions for earthquakes to occur in a large circle, based on integral counts of transient seismicity in the circle. In a retroactive prediction for the period January 1, 1985 to July 1, 1991, the algorithm as configured for the forward test would have predicted eight of ten strong earthquakes in the test area. A null hypothesis, based on random assignment of predictions, predicts eight earthquakes in 2.87% of the trials. The forward test began July 1, 1991 and will run through December 31, 1997. As of July 1, 1995, the algorithm had forward predicted five out of nine earthquakes in the test area, a success ratio that would have been achieved in 53% of random trials under the null hypothesis.
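The random-assignment null hypothesis described above can be illustrated with a binomial tail probability. This is a sketch, not the authors' actual test: it assumes alarms are declared at random so that each target earthquake independently falls inside an alarm with probability `p`, an assumed fraction of alarm coverage not given in the abstract.

```python
from math import comb

def random_prediction_tail(n: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the chance that random alarms
    covering a fraction p of space-time 'predict' at least k of n quakes."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

# With alarms covering half of space-time, predicting >= 8 of 10 by luck:
print(round(random_prediction_tail(10, 8, 0.5), 4))  # → 0.0547
```

Scoring a forward test then reduces to comparing the algorithm's hit count against this tail probability for the alarm fraction it actually used.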
Testing 40 Predictions from the Transtheoretical Model Again, with Confidence
ERIC Educational Resources Information Center
Velicer, Wayne F.; Brick, Leslie Ann D.; Fava, Joseph L.; Prochaska, James O.
2013-01-01
Testing Theory-based Quantitative Predictions (TTQP) represents an alternative to traditional Null Hypothesis Significance Testing (NHST) procedures and is more appropriate for theory testing. The theory generates explicit effect size predictions and these effect size estimates, with related confidence intervals, are used to test the predictions.…
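The TTQP logic described above can be operationalized as a simple interval check. This is an illustrative sketch, not necessarily the authors' exact procedure: a prediction counts as confirmed when the predicted effect size falls inside the confidence interval around the observed estimate. The `z` default and the example numbers are assumptions.

```python
def confirmed(predicted_es: float, observed_es: float, se: float,
              z: float = 1.96) -> bool:
    """TTQP-style check: the prediction is confirmed if the predicted
    effect size lies within the z*se confidence interval of the
    observed effect size."""
    lo, hi = observed_es - z * se, observed_es + z * se
    return lo <= predicted_es <= hi

# A predicted omega-squared of 0.10 against an observed 0.08 (SE 0.02):
print(confirmed(0.10, 0.08, 0.02))  # → True
```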
NASA Technical Reports Server (NTRS)
Macwilkinson, D. G.; Blackerby, W. T.; Paterson, J. H.
1974-01-01
The degree of cruise drag correlation on the C-141A aircraft is determined between predictions based on wind tunnel test data and flight test results. An analysis of wind tunnel tests on a 0.0275-scale model at Reynolds numbers up to 3.05 million based on mean aerodynamic chord (MAC) is reported. Model support interference corrections are evaluated through a series of tests, and fully corrected model data are analyzed to provide details on model component interference factors. It is shown that predicted minimum profile drag for the complete configuration agrees within 0.75% of flight test data, using a wind tunnel extrapolation method based on flat plate skin friction and component shape factors. An alternative method of extrapolation, based on computed profile drag from a subsonic viscous theory, results in a prediction four percent lower than flight test data.
2011-01-01
Background Allergic contact dermatitis is an inflammatory skin disease that affects a significant proportion of the population. This disease is caused by an adverse immune response towards chemical haptens and leads to a substantial economic burden for society. Current tests of sensitizing chemicals rely on animal experimentation. New legislation on the registration and use of chemicals within the pharmaceutical and cosmetic industries has stimulated significant research efforts to develop alternative, human cell-based assays for the prediction of sensitization. The aim is to replace animal experiments with in vitro tests displaying a higher predictive power. Results We have developed a novel cell-based assay for the prediction of sensitizing chemicals. By analyzing the transcriptome of the human cell line MUTZ-3 after 24 h stimulation, using 20 different sensitizing chemicals, 20 non-sensitizing chemicals and vehicle controls, we have identified a biomarker signature of 200 genes with potent discriminatory ability. Using a Support Vector Machine for supervised classification, the prediction performance of the assay revealed an area under the ROC curve of 0.98. In addition, categorizing the chemicals according to the LLNA assay, this gene signature could also predict sensitizing potency. The identified markers are involved in biological pathways with immunologically relevant functions, which can shed light on the process of human sensitization. Conclusions A gene signature predicting sensitization, using a human cell line in vitro, has been identified. This simple and robust cell-based assay has the potential to completely replace or drastically reduce the utilization of test systems based on experimental animals. Being based on human biology, the assay is proposed to be more accurate for predicting sensitization in humans than the traditional animal-based tests. PMID:21824406
The Prediction of Item Parameters Based on Classical Test Theory and Latent Trait Theory
ERIC Educational Resources Information Center
Anil, Duygu
2008-01-01
In this study, the predictive power of experts' judgments of item characteristics, for conditions in which try-out administrations cannot be conducted, was examined against item characteristics computed under classical test theory and the two-parameter logistic model of latent trait theory. The study was carried out on 9914 randomly selected students…
Mitchell, Travis D.; Urli, Kristina E.; Breitenbach, Jacques; Yelverton, Chris
2007-01-01
Abstract Objective This study aimed to evaluate the validity of the sacral base pressure test in diagnosing sacroiliac joint dysfunction. It also determined the predictive powers of the test in determining which type of sacroiliac joint dysfunction was present. Methods This was a double-blind experimental study with 62 participants. The results from the sacral base pressure test were compared against a cluster of previously validated tests of sacroiliac joint dysfunction to determine its validity and predictive powers. The external rotation of the feet, occurring during the sacral base pressure test, was measured using a digital inclinometer. Results There was no statistically significant difference in the results of the sacral base pressure test between the types of sacroiliac joint dysfunction. In terms of the results of validity, the sacral base pressure test was useful in identifying positive values of sacroiliac joint dysfunction. It was fairly helpful in correctly diagnosing patients with negative test results; however, it had only a “slight” agreement with the diagnosis for κ interpretation. Conclusions In this study, the sacral base pressure test was not a valid test for determining the presence of sacroiliac joint dysfunction or the type of dysfunction present. Further research comparing the agreement of the sacral base pressure test or other sacroiliac joint dysfunction tests with a criterion standard of diagnosis is necessary. PMID:19674694
NASA Astrophysics Data System (ADS)
Yilmaz, Diba; Tekkaya, Ceren; Sungur, Semra
2011-03-01
The present study examined the comparative effects of a prediction/discussion-based learning cycle, conceptual change text (CCT), and traditional instruction on students' understanding of genetics concepts. A quasi-experimental pre-test/post-test non-equivalent control group design was adopted. The three intact classes, taught by the same science teacher, were randomly assigned as the prediction/discussion-based learning cycle class (N = 30), the CCT class (N = 25), and the traditional class (N = 26). Participants completed the genetics concept test as pre-test, post-test, and delayed post-test to examine the effects of the instructional strategies on their understanding and retention of genetics. While the dependent variable of this study was students' understanding of genetics, the independent variables were time (Time 1, Time 2, and Time 3) and mode of instruction. The mixed between-within subjects analysis of variance revealed that students in both the prediction/discussion-based learning cycle and CCT groups understood the genetics concepts and retained their knowledge significantly better than students in the traditional instruction group.
Wombacher, Kevin; Dai, Minhao; Matig, Jacob J; Harrington, Nancy Grant
2018-03-22
To identify salient behavioral determinants related to STI testing among college students by testing a model based on the integrative model of behavioral prediction (IMBP). Participants were 265 undergraduate students from a large university in the Southeastern US. Formative and survey research was used to test an IMBP-based model that explores the relationships between determinants and STI testing intention and behavior. Results of path analyses supported a model in which attitudinal beliefs predicted intention and intention predicted behavior. Normative beliefs and behavioral control beliefs were not significant in the model; however, select individual normative and control beliefs were significantly correlated with intention and behavior. Attitudinal beliefs are the strongest predictor of STI testing intention and behavior. Future efforts to increase STI testing rates should identify and target salient attitudinal beliefs.
Power load prediction based on GM (1,1)
NASA Astrophysics Data System (ADS)
Wu, Di
2017-05-01
Power load prediction is currently a major focus in China. This paper studies grey prediction in depth and applies it to Chinese electricity consumption over the past 14 years; a posterior-error test shows that the resulting grey prediction model adapts well to medium- and long-term power load forecasting.
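The GM(1,1) grey model referenced in the title can be sketched in a few lines: accumulate the raw series (1-AGO), fit the grey differential equation dx⁽¹⁾/dt + a·x⁽¹⁾ = b by least squares on background midpoints, then difference the fitted cumulative series to forecast. The input series below is illustrative, not the paper's 14-year consumption data.

```python
from math import exp

def gm11_forecast(x0: list[float], steps: int) -> list[float]:
    """Grey GM(1,1) forecast: fit dx1/dt + a*x1 = b on the cumulative
    series, then return `steps` out-of-sample predictions."""
    n = len(x0)
    # 1-AGO: the cumulative sum smooths the raw series
    x1 = [sum(x0[:i + 1]) for i in range(n)]
    # background values: midpoints of consecutive cumulative sums
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]
    y = x0[1:]
    mz, my = sum(z) / len(z), sum(y) / len(y)
    cov = sum((zi - mz) * (yi - my) for zi, yi in zip(z, y))
    var = sum((zi - mz) ** 2 for zi in z)
    a = -cov / var       # development coefficient (must be nonzero)
    b = my + a * mz      # grey input
    def x1_hat(k):       # time-response function, k = 0, 1, ...
        return (x0[0] - b / a) * exp(-a * k) + b / a
    return [x1_hat(n + s) - x1_hat(n + s - 1) for s in range(steps)]

# One-step-ahead forecast of a roughly 10%-growth consumption series:
print(round(gm11_forecast([100, 110, 121, 133.1], 1)[0], 1))
```

Because the time-response function is exponential, GM(1,1) is best suited to monotone, roughly geometric series, which is consistent with its use for medium- and long-term load growth.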
Base Rates, Contingencies, and Prediction Behavior
ERIC Educational Resources Information Center
Kareev, Yaakov; Fiedler, Klaus; Avrahami, Judith
2009-01-01
A skew in the base rate of upcoming events can often provide a better cue for accurate predictions than a contingency between signals and events. The authors study prediction behavior and test people's sensitivity to both base rate and contingency; they also examine people's ability to compare the benefits of both for prediction. They formalize…
NASA Astrophysics Data System (ADS)
Wang, F.; Annable, M. D.; Jawitz, J. W.
2012-12-01
The equilibrium streamtube model (EST) has demonstrated the ability to accurately predict dense nonaqueous phase liquid (DNAPL) dissolution in laboratory experiments and numerical simulations. Here the model is applied to predict DNAPL dissolution at a PCE-contaminated dry cleaner site, located in Jacksonville, Florida. The EST is an analytical solution with field-measurable input parameters. Here, measured data from a field-scale partitioning tracer test were used to parameterize the EST model and the predicted PCE dissolution was compared to measured data from an in-situ alcohol (ethanol) flood. In addition, a simulated partitioning tracer test from a calibrated spatially explicit multiphase flow model (UTCHEM) was also used to parameterize the EST analytical solution. The ethanol prediction based on both the field partitioning tracer test and the UTCHEM tracer test simulation closely matched the field data. The PCE EST prediction showed a peak shift to an earlier arrival time that was concluded to be caused by well screen interval differences between the field tracer test and alcohol flood. This observation was based on a modeling assessment of potential factors that may influence predictions by using UTCHEM simulations. The imposed injection and pumping flow pattern at this site for both the partitioning tracer test and alcohol flood was more complex than the natural gradient flow pattern (NGFP). Both the EST model and UTCHEM were also used to predict PCE dissolution under natural gradient conditions, with much simpler flow patterns than the forced-gradient double five spot of the alcohol flood. The NGFP predictions based on parameters determined from tracer tests conducted with complex flow patterns underestimated PCE concentrations and total mass removal. This suggests that the flow patterns influence aqueous dissolution and that the aqueous dissolution under the NGFP is more efficient than dissolution under complex flow patterns.
Testing the Predictive Power of Coulomb Stress on Aftershock Sequences
NASA Astrophysics Data System (ADS)
Woessner, J.; Lombardi, A.; Werner, M. J.; Marzocchi, W.
2009-12-01
Empirical and statistical models of clustered seismicity are usually strongly stochastic and perceived to be uninformative in their forecasts, since only marginal distributions are used, such as the Omori-Utsu and Gutenberg-Richter laws. In contrast, so-called physics-based aftershock models, based on seismic rate changes calculated from Coulomb stress changes and rate-and-state friction, make more specific predictions: anisotropic stress shadows and multiplicative rate changes. We test the predictive power of models based on Coulomb stress changes against statistical models, including the popular Short Term Earthquake Probabilities and Epidemic-Type Aftershock Sequences models: We score and compare retrospective forecasts on the aftershock sequences of the 1992 Landers, USA, the 1997 Colfiorito, Italy, and the 2008 Selfoss, Iceland, earthquakes. To quantify predictability, we use likelihood-based metrics that test the consistency of the forecasts with the data, including modified and existing tests used in prospective forecast experiments within the Collaboratory for the Study of Earthquake Predictability (CSEP). Our results indicate that a statistical model performs best. Moreover, two Coulomb model classes seem unable to compete: Models based on deterministic Coulomb stress changes calculated from a given fault-slip model, and those based on fixed receiver faults. One model of Coulomb stress changes does perform well and sometimes outperforms the statistical models, but its predictive information is diluted, because of uncertainties included in the fault-slip model. Our results suggest that models based on Coulomb stress changes need to incorporate stochastic features that represent model and data uncertainty.
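One of the CSEP-style consistency metrics mentioned above is the number test (N-test), which asks whether the observed event count is plausible under the forecast's expected rate. A minimal sketch assuming a Poisson forecast (the rate below is illustrative, not from the tested sequences):

```python
from math import exp, factorial

def poisson_cdf(n: int, lam: float) -> float:
    """P(N <= n) for a Poisson-distributed event count with rate lam."""
    return sum(exp(-lam) * lam**k / factorial(k) for k in range(n + 1))

def n_test(observed: int, forecast_rate: float) -> tuple[float, float]:
    """CSEP-style N-test quantiles: probabilities of seeing at least /
    at most the observed number of events under the forecast. The
    forecast is rejected if either quantile is very small."""
    delta1 = 1.0 - poisson_cdf(observed - 1, forecast_rate)  # P(N >= obs)
    delta2 = poisson_cdf(observed, forecast_rate)            # P(N <= obs)
    return delta1, delta2

# A forecast of 5 aftershocks against 5 observed is comfortably consistent:
d1, d2 = n_test(5, 5.0)
```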
Flight-Test Evaluation of Flutter-Prediction Methods
NASA Technical Reports Server (NTRS)
Lind, Rick; Brenner, Marty
2003-01-01
The flight-test community routinely spends considerable time and money to determine a range of flight conditions, called a flight envelope, within which an aircraft is safe to fly. The cost of determining a flight envelope could be greatly reduced if there were a method of safely and accurately predicting the speed associated with the onset of an instability called flutter. Several methods have been developed with the goal of predicting flutter speeds to improve the efficiency of flight testing. These methods include (1) data-based methods, in which one relies entirely on information obtained from the flight tests, and (2) model-based approaches, in which one relies on a combination of flight data and theoretical models. The data-driven methods include one based on extrapolation of damping trends, one that involves an envelope function, one that involves the Zimmerman-Weissenburger flutter margin, and one that involves a discrete-time auto-regressive model. An example of a model-based approach is that of the flutterometer. These methods have all been shown to be theoretically valid and have been demonstrated on simple test cases; however, until now, they have not been thoroughly evaluated in flight tests. An experimental apparatus called the Aerostructures Test Wing (ATW) was developed to test these prediction methods.
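The damping-trend extrapolation method listed first can be sketched simply: fit measured modal damping versus airspeed and extrapolate to the zero-damping crossing, the predicted flutter onset. Practical implementations use more conservative curve fits and uncertainty bounds; the linear fit and data below are illustrative assumptions only.

```python
def flutter_speed_from_damping(speeds: list[float],
                               damping: list[float]) -> float:
    """Damping-trend extrapolation (simplified linear sketch): fit
    damping g = a + b*V by least squares and return the speed where
    the fitted damping crosses zero."""
    n = len(speeds)
    mv = sum(speeds) / n
    mg = sum(damping) / n
    b = sum((v - mv) * (g - mg) for v, g in zip(speeds, damping)) / \
        sum((v - mv) ** 2 for v in speeds)
    a = mg - b * mv
    return -a / b  # zero-damping crossing

# Damping eroding linearly with airspeed (hypothetical test points):
print(round(flutter_speed_from_damping([100, 150, 200, 250],
                                       [0.04, 0.03, 0.02, 0.01]), 2))
```

The known weakness of this method, which motivated the alternatives in the abstract, is that damping can change abruptly near flutter, so a smooth extrapolation from subcritical data may be badly unconservative.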
Caffo, Brian; Diener-West, Marie; Punjabi, Naresh M.; Samet, Jonathan
2010-01-01
This manuscript considers a data-mining approach for the prediction of mild obstructive sleep disordered breathing, defined as an elevated respiratory disturbance index (RDI), in 5,530 participants in a community-based study, the Sleep Heart Health Study. The prediction algorithm was built using modern ensemble learning algorithms, specifically boosting, which allowed for assessing potential high-dimensional interactions between predictor variables or classifiers. To evaluate the performance of the algorithm, the data were split into training and validation sets for varying thresholds for predicting the probability of a high RDI (≥ 7 events per hour in the given results). Based on a moderate classification threshold from the boosting algorithm, the estimated post-test odds of a high RDI were 2.20 times higher than the pre-test odds given a positive test, while the corresponding post-test odds were decreased by 52% given a negative test (sensitivity and specificity of 0.66 and 0.70, respectively). In rank order, the following variables had the largest impact on prediction performance: neck circumference, body mass index, age, snoring frequency, waist circumference, and snoring loudness. Citation: Caffo B; Diener-West M; Punjabi NM; Samet J. A novel approach to prediction of mild obstructive sleep disordered breathing in a population-based sample: the Sleep Heart Health Study. SLEEP 2010;33(12):1641-1648. PMID:21120126
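The post-test odds quoted above follow directly from the classifier's likelihood ratios: with sensitivity 0.66 and specificity 0.70, the positive likelihood ratio is 0.66/0.30 = 2.2 and the negative likelihood ratio is 0.34/0.70 ≈ 0.49, so a negative test cuts the odds roughly in half.

```python
def likelihood_ratios(sensitivity: float,
                      specificity: float) -> tuple[float, float]:
    """Positive and negative likelihood ratios of a binary test.
    Post-test odds = pre-test odds * likelihood ratio."""
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg

# Sensitivity 0.66 and specificity 0.70, as reported for the RDI classifier:
lr_pos, lr_neg = likelihood_ratios(0.66, 0.70)
print(round(lr_pos, 2))      # → 2.2  (post-test odds 2.2x pre-test odds)
print(round(1 - lr_neg, 2))  # → 0.51 (odds roughly halved by a negative test)
```

The ~51% reduction matches the abstract's reported 52% after rounding.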
The construction of life prediction models for the design of Stirling engine heater components
NASA Technical Reports Server (NTRS)
Petrovich, A.; Bright, A.; Cronin, M.; Arnold, S.
1983-01-01
The service life of Stirling-engine heater structures of Fe-based high-temperature alloys is predicted using a numerical model based on a linear-damage approach and published test data (engine test data for a Co-based alloy and tensile-test results for both the Co-based and the Fe-based alloys). The operating principle of the automotive Stirling engine is reviewed; the economic and technical factors affecting the choice of heater material are surveyed; the test results are summarized in tables and graphs; the engine environment and automotive duty cycle are characterized; and the modeling procedure is explained. It is found that the statistical scatter of the fatigue properties of the heater components needs to be reduced (by decreasing the porosity of the cast material or employing wrought material in fatigue-prone locations) before the accuracy of life predictions can be improved.
NASA Astrophysics Data System (ADS)
Wu, Z. R.; Li, X.; Fang, L.; Song, Y. D.
2018-04-01
A new multiaxial fatigue life prediction model has been proposed in this paper. The concepts of nonlinear continuum damage mechanics and critical plane criteria were incorporated in the proposed model. The shear-strain-based damage control parameter was chosen to account for multiaxial fatigue damage under constant amplitude loading. Fatigue tests were conducted on nickel-based superalloy GH4169 tubular specimens at 400 °C under proportional and nonproportional loading. The proposed method was checked against the multiaxial fatigue test data of GH4169. Most of the prediction results fall within a factor-of-two scatter band of the test results.
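The factor-of-two scatter band used above is a common fatigue-life acceptance criterion: a prediction counts as acceptable if it lies between half and twice the observed life. A sketch with hypothetical cycles-to-failure pairs (not the GH4169 data):

```python
def within_factor_of_two(predicted: list[float],
                         observed: list[float]) -> float:
    """Fraction of life predictions falling inside the factor-of-two
    scatter band, i.e. 0.5 <= predicted/observed <= 2."""
    hits = sum(1 for p, o in zip(predicted, observed)
               if 0.5 <= p / o <= 2.0)
    return hits / len(predicted)

# Hypothetical predicted vs. measured cycles to failure:
print(round(within_factor_of_two([1.0e4, 4.0e4, 9.0e4],
                                 [1.8e4, 3.5e4, 2.0e4]), 2))  # → 0.67
```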
ERIC Educational Resources Information Center
Meylani, Rusen; Bitter, Gary G.; Castaneda, Rene
2014-01-01
In this study regression and neural networks based methods are used to predict statewide high-stakes test results for middle school mathematics using the scores obtained from third party tests throughout the school year. Such prediction is of utmost significance for school districts to live up to the state's educational standards mandated by the…
Mated vertical ground vibration test
NASA Technical Reports Server (NTRS)
Ivey, E. W.
1980-01-01
The Mated Vertical Ground Vibration Test (MVGVT) was conducted to provide an experimental base, in the form of structural dynamic characteristics, for the shuttle vehicle. This data base was used in developing high-confidence analytical models for the prediction and design of loads, pogo controls, and flutter criteria under various payloads and operational missions. The evolution of the MVGVT boost and launch program, the test configurations, and their suspensions are described. Test results are compared with predicted analytical results.
Mateen, Bilal Akhter; Bussas, Matthias; Doogan, Catherine; Waller, Denise; Saverino, Alessia; Király, Franz J; Playford, E Diane
2018-05-01
To determine whether tests of cognitive function and patient-reported outcome measures of motor function can be used to create a machine learning-based predictive tool for falls. Prospective cohort study. Tertiary neurological and neurosurgical center. In all, 337 in-patients receiving neurosurgical, neurological, or neurorehabilitation-based care. Binary (Y/N) for falling during the in-patient episode, the Trail Making Test (a measure of attention and executive function) and the Walk-12 (a patient-reported measure of physical function). The principal outcome was a fall during the in-patient stay (n = 54). The Trail Making Test was identified as the best predictor of falls. Moreover, the addition of other variables did not improve the prediction (Wilcoxon signed-rank P < 0.001). Classical linear statistical modeling methods were then compared with more recent machine-learning-based strategies, for example, random forests, neural networks, and support vector machines. The random forest was the best modeling strategy when utilizing just the Trail Making Test data (Wilcoxon signed-rank P < 0.001), with 68% (± 7.7) sensitivity and 90% (± 2.3) specificity. This study identifies a simple yet powerful machine-learning (random forest) based predictive model for an in-patient neurological population, utilizing a single neuropsychological test of cognitive function, the Trail Making Test.
Wright, Julie A.; Velicer, Wayne F.; Prochaska, James O.
2009-01-01
This study evaluated how well predictions from the transtheoretical model (TTM) generalized from smoking to diet. Longitudinal data were used from a randomized control trial on reducing dietary fat consumption in adults (n = 1207) recruited from primary care practices. Predictive power was evaluated by making a priori predictions of the magnitude of change expected in the TTM constructs of temptation, pros and cons, and 10 processes of change when an individual transitions between the stages of change. Generalizability was evaluated by testing predictions based on smoking data. Three sets of predictions were made for each stage: Precontemplation (PC), Contemplation (C) and Preparation (PR) based on stage transition categories of no progress, progress and regression determined by stage at baseline versus stage at the 12-month follow-up. Univariate analysis of variance between stage transition groups was used to calculate the effect size [omega squared (ω²)]. For diet predictions based on diet data, there was a high degree of confirmation: 92%, 95% and 92% for PC, C and PR, respectively. For diet predictions based on smoking data, 77%, 79% and 85% were confirmed, respectively, suggesting a moderate degree of generalizability. This study revised effect size estimates for future theory testing on the TTM applied to dietary fat. PMID:18400785
Genome-based prediction of test cross performance in two subsequent breeding cycles.
Hofheinz, Nina; Borchardt, Dietrich; Weissleder, Knuth; Frisch, Matthias
2012-12-01
Genome-based prediction of genetic values is expected to overcome shortcomings that limit the application of QTL mapping and marker-assisted selection in plant breeding. Our goal was to study the genome-based prediction of test cross performance with genetic effects that were estimated using genotypes from the preceding breeding cycle. In particular, our objectives were to employ a ridge regression approach that approximates best linear unbiased prediction of genetic effects, compare cross validation with validation using genetic material of the subsequent breeding cycle, and investigate the prospects of genome-based prediction in sugar beet breeding. We focused on the traits sugar content and standard molasses loss (ML) and used a set of 310 sugar beet lines to estimate genetic effects at 384 SNP markers. In cross validation, correlations >0.8 between observed and predicted test cross performance were observed for both traits. However, in validation with 56 lines from the next breeding cycle, a correlation of 0.8 could only be observed for sugar content; for standard ML the correlation reduced to 0.4. We found that ridge regression based on preliminary estimates of the heritability provided a very good approximation of best linear unbiased prediction and was not accompanied by a loss in prediction accuracy. We conclude that prediction accuracy assessed with cross validation within one cycle of a breeding program cannot be used as an indicator for the accuracy of predicting lines of the next cycle. Prediction of lines of the next cycle seems promising for traits with high heritabilities.
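The ridge regression approach described above shrinks marker effects toward zero through a penalty, approximating best linear unbiased prediction when the penalty is set from the heritability. The two-marker toy below (hypothetical genotype codes and phenotypes, and an arbitrary penalty; the study itself used 384 SNPs) sketches the estimator with an analytic 2x2 solve:

```python
def ridge_2marker(X: list[list[float]], y: list[float],
                  lam: float) -> list[float]:
    """Ridge regression for two marker covariates (a toy sketch of
    genome-based prediction). Solves (X'X + lam*I) beta = X'y using
    the analytic inverse of the 2x2 normal-equation matrix."""
    s11 = sum(r[0] * r[0] for r in X) + lam
    s22 = sum(r[1] * r[1] for r in X) + lam
    s12 = sum(r[0] * r[1] for r in X)
    t1 = sum(r[0] * yi for r, yi in zip(X, y))
    t2 = sum(r[1] * yi for r, yi in zip(X, y))
    det = s11 * s22 - s12 * s12
    return [(s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det]

# Hypothetical genotype codes for two SNPs and test cross phenotypes:
beta = ridge_2marker([[1, 0], [0, 1], [1, 1]], [1.0, 2.0, 3.0], lam=0.0)
print([round(b, 6) for b in beta])  # → [1.0, 2.0]
```

With `lam > 0` the same call returns effects shrunk toward zero; in practice the many-marker version is solved with a linear algebra library rather than by hand.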
Cohen, Jérémie F.; Cohen, Robert; Levy, Corinne; Thollot, Franck; Benani, Mohamed; Bidet, Philippe; Chalumeau, Martin
2015-01-01
Background: Several clinical prediction rules for diagnosing group A streptococcal infection in children with pharyngitis are available. We aimed to compare the diagnostic accuracy of rules-based selective testing strategies in a prospective cohort of children with pharyngitis. Methods: We identified clinical prediction rules through a systematic search of MEDLINE and Embase (1975–2014), which we then validated in a prospective cohort involving French children who presented with pharyngitis during a 1-year period (2010–2011). We diagnosed infection with group A streptococcus using two throat swabs: one obtained for a rapid antigen detection test (StreptAtest, Dectrapharm) and one obtained for culture (reference standard). We validated rules-based selective testing strategies as follows: low risk of group A streptococcal infection, no further testing or antibiotic therapy needed; intermediate risk of infection, rapid antigen detection for all patients and antibiotic therapy for those with a positive test result; and high risk of infection, empiric antibiotic treatment. Results: We identified 8 clinical prediction rules, 6 of which could be prospectively validated. Sensitivity and specificity of rules-based selective testing strategies ranged from 66% (95% confidence interval [CI] 61–72) to 94% (95% CI 92–97) and from 40% (95% CI 35–45) to 88% (95% CI 85–91), respectively. Use of rapid antigen detection testing following the clinical prediction rule ranged from 24% (95% CI 21–27) to 86% (95% CI 84–89). None of the rules-based selective testing strategies achieved our diagnostic accuracy target (sensitivity and specificity > 85%). Interpretation: Rules-based selective testing strategies did not show sufficient diagnostic accuracy in this study population. The relevance of clinical prediction rules for determining which children with pharyngitis should undergo a rapid antigen detection test remains questionable. PMID:25487666
Dragovic, Sanja; Vermeulen, Nico P E; Gerets, Helga H; Hewitt, Philip G; Ingelman-Sundberg, Magnus; Park, B Kevin; Juhila, Satu; Snoeys, Jan; Weaver, Richard J
2016-12-01
The current test systems employed by the pharmaceutical industry are poorly predictive of drug-induced liver injury (DILI). The 'MIP-DILI' project addresses this situation through the development of innovative preclinical test systems which are both mechanism-based and of physiological, pharmacological and pathological relevance to DILI in humans. An iterative, tiered approach with respect to test compounds, test systems, bioanalysis and systems analysis is adopted to evaluate existing models and develop new models that can provide validated test systems with respect to the prediction of specific forms of DILI and further elucidation of mechanisms. An essential component of this effort is the choice of compound training set that will be used to inform refinement and/or development of new model systems that allow prediction based on knowledge of mechanisms, in a tiered fashion. In this review, we focus on the selection of MIP-DILI training compounds for mechanism-based evaluation of non-clinical prediction of DILI. The selected compounds address both hepatocellular and cholestatic DILI patterns in man, covering a broad range of pharmacologies and chemistries, and taking into account available data on potential DILI mechanisms (e.g. mitochondrial injury, reactive metabolites, biliary transport inhibition, and immune responses). Known mechanisms by which these compounds are believed to cause liver injury have been described, and many if not all drugs in this review appear to exhibit multiple toxicological mechanisms. Thus, the training compound selection offers a valuable tool to profile DILI mechanisms and to interrogate existing and novel in vitro systems for the prediction of human DILI.
Pesesky, Mitchell W; Hussain, Tahir; Wallace, Meghan; Patel, Sanket; Andleeb, Saadia; Burnham, Carey-Ann D; Dantas, Gautam
2016-01-01
The time-to-result for culture-based microorganism recovery and phenotypic antimicrobial susceptibility testing necessitates initial use of empiric (frequently broad-spectrum) antimicrobial therapy. If the empiric therapy is not optimal, this can lead to adverse patient outcomes and contribute to increasing antibiotic resistance in pathogens. New, more rapid technologies are emerging to meet this need. Many of these are based on identifying resistance genes, rather than directly assaying resistance phenotypes, and thus require interpretation to translate the genotype into treatment recommendations. These interpretations, like other parts of clinical diagnostic workflows, are likely to be increasingly automated in the future. We set out to evaluate the two major approaches that could be amenable to automation pipelines: rules-based methods and machine learning methods. The rules-based algorithm makes predictions based upon current, curated knowledge of Enterobacteriaceae resistance genes. The machine-learning algorithm predicts resistance and susceptibility based on a model built from a training set of variably resistant isolates. As our test set, we used whole genome sequence data from 78 clinical Enterobacteriaceae isolates, previously identified to represent a variety of phenotypes, from fully-susceptible to pan-resistant strains for the antibiotics tested. We tested three antibiotic resistance determinant databases for their utility in identifying the complete resistome for each isolate. The predictions of the rules-based and machine learning algorithms for these isolates were compared to results of phenotype-based diagnostics. The rules-based and machine-learning predictions achieved agreement with standard-of-care phenotypic diagnostics of 89.0% and 90.3%, respectively, across twelve antibiotic agents from six major antibiotic classes. Several sources of disagreement between the algorithms were identified.
Novel variants of known resistance factors and incomplete genome assembly confounded the rules-based algorithm, resulting in predictions based on gene family, rather than on knowledge of the specific variant found. Low-frequency resistance caused errors in the machine-learning algorithm because those genes were not seen or seen infrequently in the test set. We also identified an example of variability in the phenotype-based results that led to disagreement with both genotype-based methods. Genotype-based antimicrobial susceptibility testing shows great promise as a diagnostic tool, and we outline specific research goals to further refine this methodology.
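The agreement figures above reduce to a per-call comparison between genotype-based predictions and phenotypic results over all (isolate, antibiotic) pairs. A minimal sketch with invented resistant/susceptible calls, not the study's data:

```python
# Percent categorical agreement between predicted and phenotypic
# resistance calls ("R" resistant, "S" susceptible). Illustrative only.
def percent_agreement(predicted, phenotype):
    matches = sum(p == t for p, t in zip(predicted, phenotype))
    return 100.0 * matches / len(phenotype)

pheno = ["R", "R", "S", "S", "R", "S", "S", "R", "S", "S"]
rules = ["R", "R", "S", "S", "S", "S", "S", "R", "S", "S"]  # one miscall
print(percent_agreement(rules, pheno))
```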
A review of propeller noise prediction methodology: 1919-1994
NASA Technical Reports Server (NTRS)
Metzger, F. Bruce
1995-01-01
This report summarizes a review of the literature regarding propeller noise prediction methods. The review is divided into six sections: (1) early methods; (2) more recent methods based on earlier theory; (3) more recent methods based on the Acoustic Analogy; (4) more recent methods based on Computational Acoustics; (5) empirical methods; and (6) broadband methods. The report concludes that there are a large number of noise prediction procedures available which vary markedly in complexity. Deficiencies in accuracy of methods in many cases may be related, not to the methods themselves, but the accuracy and detail of the aerodynamic inputs used to calculate noise. The steps recommended in the report to provide accurate and easy to use prediction methods are: (1) identify reliable test data; (2) define and conduct test programs to fill gaps in the existing data base; (3) identify the most promising prediction methods; (4) evaluate promising prediction methods relative to the data base; (5) identify and correct the weaknesses in the prediction methods, including lack of user friendliness, and include features now available only in research codes; (6) confirm the accuracy of improved prediction methods to the data base; and (7) make the methods widely available and provide training in their use.
Neuropsychological tests for predicting cognitive decline in older adults
Baerresen, Kimberly M; Miller, Karen J; Hanson, Eric R; Miller, Justin S; Dye, Richelin V; Hartman, Richard E; Vermeersch, David; Small, Gary W
2015-01-01
Aim: To determine neuropsychological tests likely to predict cognitive decline. Methods: A sample of nonconverters (n = 106) was compared with those who declined in cognitive status (n = 24). Significant univariate logistic regression prediction models were used to create multivariate logistic regression models to predict decline based on initial neuropsychological testing. Results: Rey–Osterrieth Complex Figure Test (RCFT) Retention predicted conversion to mild cognitive impairment (MCI), while baseline Buschke Delay predicted conversion to Alzheimer's disease (AD). Due to group sample size differences, additional analyses were conducted using a subsample of demographically matched nonconverters. These analyses indicated that RCFT Retention predicted conversion to MCI and AD, and Buschke Delay predicted conversion to AD. Conclusion: Results suggest RCFT Retention and Buschke Delay may be useful in predicting cognitive decline. PMID:26107318
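The logistic regression models described here map a baseline test score to a probability of conversion. A minimal sketch of that prediction step; the intercept and coefficient below are invented for illustration, not fitted values from the study:

```python
# Logistic model: P(decline) = 1 / (1 + exp(-(b0 + b1 * score))).
# A negative coefficient encodes "lower baseline score, higher risk".
import math

def predict_decline_prob(score, intercept, coef):
    return 1.0 / (1.0 + math.exp(-(intercept + coef * score)))

# Hypothetical coefficients: lower retention score -> higher modeled risk.
p_low = predict_decline_prob(score=30.0, intercept=2.0, coef=-0.1)
p_high = predict_decline_prob(score=80.0, intercept=2.0, coef=-0.1)
print(round(p_low, 3), round(p_high, 3))
```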
PREDICTING THE EFFECTIVENESS OF CHEMICAL-PROTECTIVE CLOTHING MODEL AND TEST METHOD DEVELOPMENT
A predictive model and test method were developed for determining the chemical resistance of protective polymeric gloves exposed to liquid organic chemicals. The prediction of permeation through protective gloves by solvents was based on theories of the solution thermodynamics of...
Ołdziej, S; Czaplewski, C; Liwo, A; Chinchio, M; Nanias, M; Vila, J A; Khalili, M; Arnautova, Y A; Jagielska, A; Makowski, M; Schafroth, H D; Kaźmierkiewicz, R; Ripoll, D R; Pillardy, J; Saunders, J A; Kang, Y K; Gibson, K D; Scheraga, H A
2005-05-24
Recent improvements in the protein-structure prediction method developed in our laboratory, based on the thermodynamic hypothesis, are described. The conformational space is searched extensively at the united-residue level by using our physics-based UNRES energy function and the conformational space annealing method of global optimization. The lowest-energy coarse-grained structures are then converted to an all-atom representation and energy-minimized with the ECEPP/3 force field. The procedure was assessed in two recent blind tests of protein-structure prediction. During the first blind test, we predicted large fragments of alpha and alpha+beta proteins [60-70 residues with C(alpha) rms deviation (rmsd) <6 A]. However, for alpha+beta proteins, significant topological errors occurred despite low rmsd values. In the second exercise, we predicted whole structures of five proteins (two alpha and three alpha+beta, with sizes of 53-235 residues) with remarkably good accuracy. In particular, for the genomic target TM0487 (a 102-residue alpha+beta protein from Thermotoga maritima), we predicted the complete, topologically correct structure with 7.3-A C(alpha) rmsd. So far this protein is the largest alpha+beta protein predicted based solely on the amino acid sequence and a physics-based potential-energy function and search procedure. For target T0198, a phosphate transport system regulator PhoU from T. maritima (a 235-residue mainly alpha-helical protein), we predicted the topology of the whole six-helix bundle correctly within 8 A rmsd, except the 32 C-terminal residues, most of which form a beta-hairpin. These and other examples described in this work demonstrate significant progress in physics-based protein-structure prediction.
A comparison of fatigue life prediction methodologies for rotorcraft
NASA Technical Reports Server (NTRS)
Everett, R. A., Jr.
1990-01-01
Because of the current U.S. Army requirement that all new rotorcraft be designed to a 'six nines' reliability on fatigue life, this study was undertaken to assess the accuracy of the current safe life philosophy, which uses the nominal-stress Palmgren-Miner linear cumulative damage rule, in predicting the fatigue life of rotorcraft dynamic components. It has been shown that this methodology can predict fatigue lives that differ from test lives by more than two orders of magnitude. A further objective of this work was to compare the accuracy of this methodology to another safe life method, called the local strain approach, as well as to a method which predicts fatigue life based solely on crack growth data. Spectrum fatigue tests were run on notched (k_t = 3.2) specimens made of 4340 steel using the Felix/28 helicopter loading spectrum. The local strain approach predicted the test lives fairly well, being slightly on the unconservative side of the test data. The crack growth method, which is based on 'small crack' crack growth data and a crack-closure model, also predicted the fatigue lives very well, with the predicted lives being slightly longer than the mean test lives but within the experimental scatter band. The crack growth model was also able to predict the change in test lives produced by the rainflow reconstructed spectra.
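The Palmgren-Miner rule assessed above sums fractional damage n_i/N_i over the blocks of a load spectrum and predicts failure when the sum reaches one. A sketch with invented cycle counts and S-N lives, not Felix/28 data:

```python
# Palmgren-Miner linear cumulative damage: D = sum(n_i / N_i).
# Failure is predicted when D = 1, so predicted life is 1/D spectrum passes.
def miner_damage(blocks):
    """blocks: iterable of (applied cycles n_i, cycles-to-failure N_i)."""
    return sum(n / N for n, N in blocks)

# One pass of a hypothetical three-block spectrum.
spectrum_pass = [(5_000, 2_000_000), (800, 150_000), (40, 12_000)]
d_per_pass = miner_damage(spectrum_pass)
passes_to_failure = 1.0 / d_per_pass
print(round(d_per_pass, 5), round(passes_to_failure, 1))
```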
Field-scale prediction of enhanced DNAPL dissolution based on partitioning tracers.
Wang, Fang; Annable, Michael D; Jawitz, James W
2013-09-01
The equilibrium streamtube model (EST) has demonstrated the ability to accurately predict dense nonaqueous phase liquid (DNAPL) dissolution in laboratory experiments and numerical simulations. Here the model is applied to predict DNAPL dissolution at a tetrachloroethylene (PCE)-contaminated dry cleaner site, located in Jacksonville, Florida. The EST model is an analytical solution with field-measurable input parameters. Measured data from a field-scale partitioning tracer test were used to parameterize the EST model and the predicted PCE dissolution was compared to measured data from an in-situ ethanol flood. In addition, a simulated partitioning tracer test from a calibrated, three-dimensional, spatially explicit multiphase flow model (UTCHEM) was also used to parameterize the EST analytical solution. The EST ethanol prediction based on both the field partitioning tracer test and the simulation closely matched the total recovery well field ethanol data with Nash-Sutcliffe efficiency E = 0.96 and 0.90, respectively. The EST PCE predictions showed a peak shift to earlier arrival times for models based on either field-measured or simulated partitioning tracer tests, resulting in poorer matches to the field PCE data in both cases. The peak shifts were concluded to be caused by well screen interval differences between the field tracer test and ethanol flood. Both the EST model and UTCHEM were also used to predict PCE aqueous dissolution under natural gradient conditions, which has a much less complex flow pattern than the forced-gradient double five spot used for the ethanol flood. The natural gradient EST predictions based on parameters determined from tracer tests conducted with a complex flow pattern underestimated the UTCHEM-simulated natural gradient total mass removal by 12% after 170 pore volumes of water flushing, indicating that some mass was not detected by the tracers, likely due to stagnation zones in the flow field.
These findings highlight the important influence of well configuration and the associated flow patterns on dissolution. © 2013.
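The Nash-Sutcliffe efficiency E used above to score the EST predictions compares model error against the variance of the observations. A minimal sketch with invented data:

```python
# Nash-Sutcliffe model efficiency:
#   E = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)
# E = 1 is a perfect match; E <= 0 means the model is no better than
# simply predicting the observed mean. Data below are illustrative.
def nash_sutcliffe(observed, simulated):
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

obs = [0.0, 1.2, 3.5, 2.1, 0.8]
sim = [0.1, 1.0, 3.3, 2.4, 0.7]
print(round(nash_sutcliffe(obs, sim), 3))
```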
NASA Technical Reports Server (NTRS)
Beck, L. R.; Rodriguez, M. H.; Dister, S. W.; Rodriguez, A. D.; Washino, R. K.; Roberts, D. R.; Spanner, M. A.
1997-01-01
A blind test of two remote sensing-based models for predicting adult populations of Anopheles albimanus in villages, an indicator of malaria transmission risk, was conducted in southern Chiapas, Mexico. One model was developed using a discriminant analysis approach, while the other was based on regression analysis. The models were developed in 1992 for an area around Tapachula, Chiapas, using Landsat Thematic Mapper (TM) satellite data and geographic information system functions. Using two remotely sensed landscape elements, the discriminant model was able to successfully distinguish between villages with high and low An. albimanus abundance with an overall accuracy of 90%. To test the predictive capability of the models, multitemporal TM data were used to generate a landscape map of the Huixtla area, northwest of Tapachula, where the models were used to predict risk for 40 villages. The resulting predictions were not disclosed until the end of the test. Independently, An. albimanus abundance data were collected in the 40 randomly selected villages for which the predictions had been made. These data were subsequently used to assess the models' accuracies. The discriminant model accurately predicted 79% of the high-abundance villages and 50% of the low-abundance villages, for an overall accuracy of 70%. The regression model correctly identified seven of the 10 villages with the highest mosquito abundance. This test demonstrated that remote sensing-based models generated for one area can be used successfully in another, comparable area.
Brady, Karen; Cracknell, Nina; Zulch, Helen; Mills, Daniel Simon
2018-01-01
Working dogs are selected based on predictions from tests that they will be able to perform specific tasks in often challenging environments. However, withdrawal from service in working dogs is still a big problem, bringing into question the reliability of the selection tests used to make these predictions. A systematic review was undertaken aimed at bringing together available information on the reliability and predictive validity of the assessment of behavioural characteristics used with working dogs to establish the quality of selection tests currently available for use to predict success in working dogs. The search procedures resulted in 16 papers meeting the criteria for inclusion. A large range of behaviour tests and parameters were used in the identified papers, and so behaviour tests and their underpinning constructs were grouped on the basis of their relationship with positive core affect (willingness to work, human-directed social behaviour, object-directed play tendencies) and negative core affect (human-directed aggression, approach withdrawal tendencies, sensitivity to aversives). We then examined the papers for reports of inter-rater reliability, within-session intra-rater reliability, test-retest validity and predictive validity. The review revealed a widespread lack of information relating to the reliability and validity of measures to assess behaviour and inconsistencies in terminologies, study parameters and indices of success. There is a need to standardise the reporting of these aspects of behavioural tests in order to improve the knowledge base of what characteristics are predictive of optimal performance in working dog roles, improving selection processes and reducing working dog redundancy. We suggest the use of a framework based on explaining the direct or indirect relationship of the test with core affect.
Differential Prediction Generalization in College Admissions Testing
ERIC Educational Resources Information Center
Aguinis, Herman; Culpepper, Steven A.; Pierce, Charles A.
2016-01-01
We introduce the concept of "differential prediction generalization" in the context of college admissions testing. Specifically, we assess the extent to which predicted first-year college grade point average (GPA) based on high-school grade point average (HSGPA) and SAT scores depends on a student's ethnicity and gender and whether this…
Performance of the dipstick screening test as a predictor of negative urine culture
Marques, Alexandre Gimenes; Doi, André Mario; Pasternak, Jacyr; Damascena, Márcio dos Santos; França, Carolina Nunes; Martino, Marinês Dalla Valle
2017-01-01
Objective: To investigate whether the urine dipstick screening test can be used to predict urine culture results. Methods: A retrospective study conducted between January and December 2014 based on data from 8,587 patients with a medical order for urine dipstick test, urine sediment analysis and urine culture. Sensitivity, specificity, and positive and negative predictive values were determined, and ROC curve analysis was performed. Results: The percentage of positive cultures was 17.5%. Nitrite had 28% sensitivity and 99% specificity, with positive and negative predictive values of 89% and 87%, respectively. Leukocyte esterase had 79% sensitivity and 84% specificity, with positive and negative predictive values of 51% and 95%, respectively. The combination of positive nitrite or positive leukocyte esterase tests had 85% sensitivity and 84% specificity, with positive and negative predictive values of 53% and 96%, respectively. Positive urinary sediment (more than ten leukocytes per microliter) had 92% sensitivity and 71% specificity, with positive and negative predictive values of 40% and 98%, respectively. The combination of a positive nitrite test and positive urinary sediment had 82% sensitivity and 99% specificity, with positive and negative predictive values of 91% and 98%, respectively. The combination of nitrite or leukocyte esterase positive tests and positive urinary sediment had the highest sensitivity (94%) and specificity (84%), with positive and negative predictive values of 58% and 99%, respectively. Based on ROC curve analysis, the best indicator of positive urine culture was the combination of positive leukocyte esterase or nitrite tests and positive urinary sediment, followed by positive leukocyte esterase and nitrite tests, positive urinary sediment alone, positive leukocyte esterase test alone, positive nitrite test alone, and finally the combination of positive nitrite test and urinary sediment (AUC: 0.845, 0.844, 0.817, 0.814, 0.635 and 0.626, respectively).
Conclusion: A negative urine culture can be predicted by negative dipstick test results. Therefore, this test may be a reliable predictor of negative urine culture. PMID:28444086
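All of the screening performance figures in this abstract derive from a 2x2 confusion matrix of test result versus culture result. A sketch with invented counts, not the study's 8,587 patients:

```python
# Screening-test performance from a 2x2 confusion matrix:
#   tp = test+/culture+, fp = test+/culture-,
#   fn = test-/culture+, tn = test-/culture-. Counts are illustrative.
def screening_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),  # positives correctly flagged
        "specificity": tn / (tn + fp),  # negatives correctly cleared
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

m = screening_metrics(tp=420, fp=90, fn=110, tn=910)
print({k: round(v, 2) for k, v in m.items()})
```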
Blind test of physics-based prediction of protein structures.
Shell, M Scott; Ozkan, S Banu; Voelz, Vincent; Wu, Guohong Albert; Dill, Ken A
2009-02-01
We report here a multiprotein blind test of a computer method to predict native protein structures based solely on an all-atom physics-based force field. We use the AMBER 96 potential function with an implicit (GB/SA) model of solvation, combined with replica-exchange molecular-dynamics simulations. Coarse conformational sampling is performed using the zipping and assembly method (ZAM), an approach that is designed to mimic the putative physical routes of protein folding. ZAM was applied to the folding of six proteins, from 76 to 112 monomers in length, in CASP7, a community-wide blind test of protein structure prediction. Because these predictions have about the same level of accuracy as typical bioinformatics methods, and do not utilize information from databases of known native structures, this work opens up the possibility of predicting the structures of membrane proteins, synthetic peptides, or other foldable polymers, for which there is little prior knowledge of native structures. This approach may also be useful for predicting physical protein folding routes, non-native conformations, and other physical properties from amino acid sequences.
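The C(alpha) rmsd quoted above as the accuracy measure is the root-mean-square deviation between matched atomic coordinates after structural superposition. This sketch skips the superposition step and computes rmsd for toy, already-aligned coordinates:

```python
# Root-mean-square deviation between two matched coordinate sets,
# assumed already optimally superposed. Coordinates are invented.
import math

def rmsd(coords_a, coords_b):
    assert len(coords_a) == len(coords_b)
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))

a = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (3.0, 0.0, 0.0)]
b = [(0.0, 0.5, 0.0), (1.5, 0.0, 0.5), (3.0, 0.5, 0.5)]
print(round(rmsd(a, b), 3))
```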
The Development of MST Test Information for the Prediction of Test Performances
ERIC Educational Resources Information Center
Park, Ryoungsun; Kim, Jiseon; Chung, Hyewon; Dodd, Barbara G.
2017-01-01
The current study proposes novel methods to predict multistage testing (MST) performance without conducting simulations. This method, called MST test information, is based on analytic derivation of standard errors of ability estimates across theta levels. We compared standard errors derived analytically to the simulation results to demonstrate the…
Suresh, V; Parthasarathy, S
2014-01-01
We developed a support vector machine based web server called SVM-PB-Pred to predict the Protein Block for any given amino acid sequence. The input features of SVM-PB-Pred include (i) sequence profiles (PSSM) and (ii) actual secondary structures (SS) from the DSSP method or predicted secondary structures from the NPS@ and GOR4 methods. Three combined input features, PSSM+SS(DSSP), PSSM+SS(NPS@) and PSSM+SS(GOR4), were used to train and test the SVM models. Similarly, four datasets, RS90, DB433, LI1264 and SP1577, were used to develop the SVM models. The four SVM models developed were evaluated using three different benchmarking tests, namely (i) self-consistency, (ii) seven-fold cross-validation and (iii) independent case tests. The maximum possible prediction accuracy of ~70% was observed in the self-consistency test for the SVM models of both the LI1264 and SP1577 datasets, where the PSSM+SS(DSSP) input features were used. The prediction accuracies were reduced to ~53% for PSSM+SS(NPS@) and ~43% for PSSM+SS(GOR4) in the independent case test for the SVM models of the same two datasets. Using our method, it is possible to predict the protein block letters for any query protein sequence with ~53% accuracy when the SP1577 dataset and predicted secondary structure from the NPS@ server are used. The SVM-PB-Pred server can be freely accessed through http://bioinfo.bdu.ac.in/~svmpbpred.
Taylor, S
2011-01-01
Community attitudes research regarding genetic issues is important when contemplating the potential value and utilisation of predictive testing for common diseases in mainstream health services. This article aims to report population-based attitudes and discuss their relevance to integrating genetic services in primary health contexts. Men's and women's attitudes were investigated via population-based omnibus telephone survey in Queensland, Australia. Randomly selected adults (n = 1,230) with a mean age of 48.8 years were interviewed regarding perceptions of genetic determinants of health; benefits of genetic testing that predict 'certain' versus 'probable' future illness; and concern, if any, regarding potential misuse of genetic test information. Most (75%) respondents believed genetic factors significantly influenced health status; 85% regarded genetic testing positively although attitudes varied with age. Risk-based information was less valued than certainty-based information, but women valued risk information significantly more highly than men. Respondents reported 'concern' (44%) and 'no concern' (47%) regarding potential misuse of genetic information. This study contributes important population-based data as most research has involved selected individuals closely impacted by genetic disorders. While community attitudes were positive regarding genetic testing, genetic literacy is important to establish. The nature of gender differences regarding risk perception merits further study and has policy and service implications. Community concern about potential genetic discrimination must be addressed if health benefits of testing are to be maximised. Larger questions remain in scientific, policy, service delivery, and professional practice domains before predictive testing for common disorders is efficacious in mainstream health care. Copyright © 2011 S. Karger AG, Basel.
Powell, Rachael; Pattison, Helen M; Francis, Jill J
2016-01-01
Chlamydia is a common sexually transmitted infection that has potentially serious consequences unless detected and treated early. The health service in the UK offers clinic-based testing for chlamydia but uptake is low. Identifying the predictors of testing behaviours may inform interventions to increase uptake. Self-tests for chlamydia may facilitate testing and treatment in people who avoid clinic-based testing. Self-testing and being tested by a health care professional (HCP) involve two contrasting contexts that may influence testing behaviour. However, little is known about how predictors of behaviour differ as a function of context. In this study, theoretical models of behaviour were used to assess factors that may predict intention to test in two different contexts: self-testing and being tested by a HCP. Individuals searching for or reading about chlamydia testing online were recruited using Google Adwords. Participants completed an online questionnaire that addressed previous testing behaviour and measured constructs of the Theory of Planned Behaviour and Protection Motivation Theory, which propose a total of eight possible predictors of intention. The questionnaire was completed by 310 participants. Sufficient data for multiple regression were provided by 102 and 118 respondents for self-testing and testing by a HCP respectively. Intention to self-test was predicted by vulnerability and self-efficacy, with a trend-level effect for response efficacy. Intention to be tested by a HCP was predicted by vulnerability, attitude and subjective norm. Thus, intentions to carry out two testing behaviours with very similar goals can have different predictors depending on test context. We conclude that interventions to increase self-testing should be based on evidence specifically related to test context.
Predicting preference-based SF-6D index scores from the SF-8 health survey.
Wang, P; Fu, A Z; Wee, H L; Lee, J; Tai, E S; Thumboo, J; Luo, N
2013-09-01
To develop and test functions for predicting the preference-based SF-6D index scores from the SF-8 health survey. This study was a secondary analysis of data collected in a population health survey in which respondents (n = 7,529) completed both the SF-36 and the SF-8 questionnaires. We examined seven ordinary least-square estimators for their performance in predicting SF-6D scores from the SF-8 at both the individual and the group levels. In general, all functions performed similarly well in predicting SF-6D scores, and the predictions at the group level were better than predictions at the individual level. At the individual level, 42.5-51.5% of prediction errors were smaller than the minimally important difference (MID) of the SF-6D scores, depending on the function specifications, while almost all prediction errors of the tested functions were smaller than the MID of SF-6D at the group level. At both individual and group levels, the tested functions predicted lower than actual scores at the higher end of the SF-6D scale. Our study developed functions to generate preference-based SF-6D index scores from the SF-8 health survey, the first of its kind. Further research is needed to evaluate the performance and validity of the prediction functions.
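The prediction functions tested here are ordinary least-squares mappings from SF-8 responses to SF-6D index scores. A minimal single-predictor sketch with invented, perfectly linear data (the study's estimators use the full SF-8 item set):

```python
# Closed-form simple OLS regression: fit y = b0 + b1 * x by minimizing
# squared error. Predictor/outcome values below are illustrative only.
def ols_fit(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx, slope  # (intercept, slope)

sf8_summary = [40.0, 45.0, 50.0, 55.0, 60.0]   # hypothetical SF-8 scores
sf6d_index = [0.60, 0.66, 0.72, 0.78, 0.84]    # hypothetical SF-6D scores
b0, b1 = ols_fit(sf8_summary, sf6d_index)
print(round(b0, 3), round(b1, 4))
```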
Radiomics biomarkers for accurate tumor progression prediction of oropharyngeal cancer
NASA Astrophysics Data System (ADS)
Hadjiiski, Lubomir; Chan, Heang-Ping; Cha, Kenny H.; Srinivasan, Ashok; Wei, Jun; Zhou, Chuan; Prince, Mark; Papagerakis, Silvana
2017-03-01
Accurate tumor progression prediction for oropharyngeal cancers is crucial for identifying patients who would best be treated with optimized treatment and therefore minimize the risk of under- or over-treatment. An objective decision support system that can merge the available radiomics, histopathologic and molecular biomarkers in a predictive model based on statistical outcomes of previous cases and machine learning may assist clinicians in making more accurate assessment of oropharyngeal tumor progression. In this study, we evaluated the feasibility of developing individual and combined predictive models based on quantitative image analysis from radiomics, histopathology and molecular biomarkers for oropharyngeal tumor progression prediction. With IRB approval, 31, 84, and 127 patients with head and neck CT (CT-HN), tumor tissue microarrays (TMAs) and molecular biomarker expressions, respectively, were collected. For 8 of the patients all 3 types of biomarkers were available and they were sequestered in a test set. The CT-HN lesions were automatically segmented using our level sets based method. Morphological, texture and molecular based features were extracted from CT-HN and TMA images, and selected features were merged by a neural network. The classification accuracy was quantified using the area under the ROC curve (AUC). Test AUCs of 0.87, 0.74, and 0.71 were obtained with the individual predictive models based on radiomics, histopathologic, and molecular features, respectively. Combining the radiomics and molecular models increased the test AUC to 0.90. Combining all 3 models increased the test AUC further to 0.94. This preliminary study demonstrates that the individual domains of biomarkers are useful and the integrated multi-domain approach is most promising for tumor progression prediction.
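The AUC figures above measure how well model scores rank progressing cases above non-progressing ones; the rank-comparison identity below computes AUC without tracing the ROC curve. Scores are invented:

```python
# AUC via the Mann-Whitney identity: the probability that a randomly
# chosen positive case scores above a randomly chosen negative case,
# counting ties as 1/2. Scores below are illustrative.
def auc(pos_scores, neg_scores):
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

pos = [0.9, 0.8, 0.7, 0.55]          # model scores, progressing cases
neg = [0.6, 0.4, 0.3, 0.2, 0.1]      # model scores, non-progressing cases
print(round(auc(pos, neg), 3))
```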
Chaitanya, Lakshmi; Breslin, Krystal; Zuñiga, Sofia; Wirken, Laura; Pośpiech, Ewelina; Kukla-Bartoszek, Magdalena; Sijen, Titia; Knijff, Peter de; Liu, Fan; Branicki, Wojciech; Kayser, Manfred; Walsh, Susan
2018-07-01
Forensic DNA Phenotyping (FDP), i.e. the prediction of human externally visible traits from DNA, has become a fast growing subfield within forensic genetics due to the intelligence information it can provide from DNA traces. FDP outcomes can help focus police investigations in search of unknown perpetrators, who are generally unidentifiable with standard DNA profiling. Therefore, we previously developed and forensically validated the IrisPlex DNA test system for eye colour prediction and the HIrisPlex system for combined eye and hair colour prediction from DNA traces. Here we introduce and forensically validate the HIrisPlex-S DNA test system (S for skin) for the simultaneous prediction of eye, hair, and skin colour from trace DNA. This FDP system consists of two SNaPshot-based multiplex assays targeting a total of 41 SNPs via a novel multiplex assay for 17 skin colour predictive SNPs and the previous HIrisPlex assay for 24 eye and hair colour predictive SNPs, 19 of which also contribute to skin colour prediction. The HIrisPlex-S system further comprises three statistical prediction models, the previously developed IrisPlex model for eye colour prediction based on 6 SNPs, the previous HIrisPlex model for hair colour prediction based on 22 SNPs, and the recently introduced HIrisPlex-S model for skin colour prediction based on 36 SNPs. In the forensic developmental validation testing, the novel 17-plex assay performed in full agreement with the Scientific Working Group on DNA Analysis Methods (SWGDAM) guidelines, as previously shown for the 24-plex assay. Sensitivity testing of the 17-plex assay revealed complete SNP profiles from as little as 63 pg of input DNA, equalling the previously demonstrated sensitivity threshold of the 24-plex HIrisPlex assay. 
Testing of simulated forensic casework samples (blood, semen, and saliva stains), inhibited DNA samples, low-quantity touch (trace) DNA samples, and artificially degraded DNA samples, together with concordance testing, demonstrated the robustness, efficiency, and forensic suitability of the new 17-plex assay, as previously shown for the 24-plex assay. Finally, we provide an update to the publicly available HIrisPlex website https://hirisplex.erasmusmc.nl/, now allowing the estimation of individual probabilities for 3 eye, 4 hair, and 5 skin colour categories from HIrisPlex-S input genotypes. The HIrisPlex-S DNA test represents the first forensically validated tool for skin colour prediction, and the first forensically validated tool for simultaneous eye, hair, and skin colour prediction from DNA. Copyright © 2018 Elsevier B.V. All rights reserved.
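The final step of such a system, turning input genotypes into per-category probabilities, follows the usual multinomial-logistic form. A minimal sketch with made-up weights and dosages, not the published IrisPlex/HIrisPlex-S coefficients:

```python
import math

def softmax(scores):
    """Convert per-category scores to probabilities that sum to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def predict_colour(dosages, weights, intercepts):
    """Multinomial-logistic style prediction: each category score is an
    intercept plus a weighted sum of SNP allele dosages (0, 1, or 2)."""
    scores = [b + sum(w * d for w, d in zip(ws, dosages))
              for b, ws in zip(intercepts, weights)]
    return softmax(scores)

# Hypothetical 6-SNP genotype and illustrative weights for a 3-category
# (e.g. eye colour) prediction -- NOT the published model parameters.
dosages = [2, 1, 0, 2, 1, 1]
weights = [[1.2, 0.4, -0.1, 0.8, 0.2, 0.3],   # blue
           [0.1, 0.5, 0.6, -0.2, 0.4, 0.1],   # intermediate
           [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]]    # brown (reference category)
intercepts = [-1.0, -0.5, 0.0]
probs = predict_colour(dosages, weights, intercepts)
print([round(p, 3) for p in probs])
```

The same structure extends to the 4 hair and 5 skin colour categories by adding rows of weights per category.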
NASA Astrophysics Data System (ADS)
Westphal, T.; Nijssen, R. P. L.
2014-12-01
The effect of Constant Life Diagram (CLD) formulation on fatigue life prediction under variable amplitude (VA) loading was investigated based on variable amplitude tests using three different load spectra representative of wind turbine loading. In addition to the Wisper and WisperX spectra, the recently developed NewWisper2 spectrum was used. Based on these variable amplitude fatigue results, the prediction accuracy of four CLD formulations was investigated. In the study, a piecewise linear CLD based on the S-N curves for 9 load ratios compares favourably in terms of prediction accuracy and conservativeness. For the specific laminate used in this study, Boerstra's Multislope model provides a good alternative at reduced test effort.
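A piecewise linear CLD of the kind compared here amounts to straight-line interpolation of the allowable stress amplitude between the (mean, amplitude) anchor points given by S-N curves at the tested load ratios. A sketch with invented anchor values, not measured laminate data:

```python
def cld_allowable_amplitude(mean_points, amp_points, sigma_mean):
    """Piecewise-linear Constant Life Diagram: interpolate the allowable
    stress amplitude at a given mean stress between known (mean, amplitude)
    anchor points, e.g. one per tested load ratio R."""
    pts = sorted(zip(mean_points, amp_points))
    if sigma_mean <= pts[0][0]:
        return pts[0][1]
    if sigma_mean >= pts[-1][0]:
        return pts[-1][1]
    for (m0, a0), (m1, a1) in zip(pts, pts[1:]):
        if m0 <= sigma_mean <= m1:
            t = (sigma_mean - m0) / (m1 - m0)
            return a0 + t * (a1 - a0)

# Illustrative anchors for one constant-life line (MPa); made-up numbers.
means = [-200.0, 0.0, 150.0, 400.0]
amps  = [120.0, 260.0, 200.0, 0.0]
allow = cld_allowable_amplitude(means, amps, 75.0)  # between the R-lines at 0 and 150
print(allow)
```

With 9 load ratios there would simply be more anchor points per constant-life line, tightening the interpolation.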
Prediction of failure pressure and leak rate of stress corrosion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Majumdar, S.; Kasza, K.; Park, J. Y.
2002-06-24
An "equivalent rectangular crack" approach was employed to predict rupture pressures and leak rates through laboratory-generated stress corrosion cracks and steam generator tubes removed from the McGuire Nuclear Station. Specimen flaws were sized by post-test fractography in addition to a pre-test advanced eddy current technique. The predicted and observed test data on rupture and leak rate are compared. In general, the test failure pressures and leak rates are closer to those predicted on the basis of fractography than on nondestructive evaluation (NDE). However, the predictions based on NDE results are encouraging, particularly because they have the potential to determine a more detailed geometry of ligamented cracks, from which failure pressure and leak rate can be more accurately predicted. One test specimen displayed a time-dependent increase of leak rate under constant pressure.
Strainrange partitioning behavior of the nickel-base superalloys, Rene' 80 and IN 100
NASA Technical Reports Server (NTRS)
Halford, G. R.; Nachtigall, A. J.
1978-01-01
A study was made to assess the ability of the method of Strainrange Partitioning (SRP) to both correlate and predict high-temperature, low-cycle fatigue lives of nickel-base superalloys for gas turbine applications. The partitioned strainrange versus life relationships for uncoated Rene' 80 and cast IN 100 were also determined from the ductility-normalized Strainrange Partitioning equations. These were used to predict the cyclic lives of the baseline tests. The life predictability of the method was verified for cast IN 100 by applying the baseline results to the cyclic life prediction of a series of complex strain cycling tests with multiple hold periods at constant strain. It was concluded that the method of SRP can correlate and predict the cyclic lives of laboratory specimens of the nickel-base superalloys evaluated in this program.
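The SRP life prediction combines the partitioned strainrange components through the interaction damage rule, 1/N = Σ F_ij / N_ij. A minimal sketch with hypothetical fractions and component lives, not the Rene' 80 / IN 100 baseline data:

```python
def srp_predicted_life(fractions, lives):
    """Strainrange Partitioning interaction damage rule: the predicted
    cyclic life N satisfies 1/N = sum(F_ij / N_ij) over the partitioned
    strainrange components (pp, pc, cp, cc)."""
    assert abs(sum(fractions.values()) - 1.0) < 1e-9, "fractions must sum to 1"
    return 1.0 / sum(fractions[k] / lives[k] for k in fractions)

# Hypothetical partitioned fractions and component lives for one test
# cycle; illustrative numbers only.
F = {"pp": 0.6, "pc": 0.2, "cp": 0.1, "cc": 0.1}
N = {"pp": 5000.0, "pc": 800.0, "cp": 400.0, "cc": 2000.0}
life = srp_predicted_life(F, N)
print(round(life))
```

Note how even small creep-dominated fractions (cp, cc) pull the predicted life well below the pure pp life, which is the essential point of partitioning.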
NASA Astrophysics Data System (ADS)
Naro, Daniel; Rummel, Christian; Schindler, Kaspar; Andrzejak, Ralph G.
2014-09-01
The rank-based nonlinear predictability score was recently introduced as a test for determinism in point processes. We here adapt this measure to time series sampled from time-continuous flows. We use noisy Lorenz signals to compare this approach against a classical amplitude-based nonlinear prediction error. Both measures show an almost identical robustness against Gaussian white noise. In contrast, when the amplitude distribution of the noise has a narrower central peak and heavier tails than the normal distribution, the rank-based nonlinear predictability score outperforms the amplitude-based nonlinear prediction error. For this type of noise, the nonlinear predictability score has a higher sensitivity for deterministic structure in noisy signals. It also yields a higher statistical power in a surrogate test of the null hypothesis of linear stochastic correlated signals. We show the high relevance of this improved performance in an application to electroencephalographic (EEG) recordings from epilepsy patients. Here the nonlinear predictability score again appears of higher sensitivity to nonrandomness. Importantly, it yields an improved contrast between signals recorded from brain areas where the first ictal EEG signal changes were detected (focal EEG signals) versus signals recorded from brain areas that were not involved at seizure onset (nonfocal EEG signals).
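The amplitude-based nonlinear prediction error used as the baseline here can be sketched as a nearest-neighbour predictor in a delay embedding. A minimal version (single neighbour, no Theiler window), tested on a deterministic sine series rather than Lorenz data:

```python
import math

def nonlinear_prediction_error(x, dim=3, lag=1, horizon=1):
    """Amplitude-based nonlinear prediction error: embed the series in a
    delay space, predict each point's future value from its nearest
    neighbour's future, and return the root-mean-square error."""
    n = len(x)
    span = (dim - 1) * lag
    idx = range(span, n - horizon)
    vecs = {i: [x[i - k * lag] for k in range(dim)] for i in idx}
    errs = []
    for i in idx:
        # nearest neighbour in embedding space, excluding the point itself
        j = min((j for j in idx if j != i),
                key=lambda j: sum((a - b) ** 2
                                  for a, b in zip(vecs[i], vecs[j])))
        errs.append((x[i + horizon] - x[j + horizon]) ** 2)
    return math.sqrt(sum(errs) / len(errs))

# A deterministic series should be far more predictable than noise of the
# same amplitude, which is what the score exploits.
t = [math.sin(0.3 * k) for k in range(200)]
err = nonlinear_prediction_error(t)
print(round(err, 4))
```

The rank-based variant replaces the amplitude error with a rank statistic of the neighbour's future among all candidate futures, which is what confers the robustness to heavy-tailed noise described above.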
ERIC Educational Resources Information Center
Yilmaz, Diba; Tekkaya, Ceren; Sungur, Semra
2011-01-01
The present study examined the comparative effects of a prediction/discussion-based learning cycle, conceptual change text (CCT), and traditional instructions on students' understanding of genetics concepts. A quasi-experimental research design of the pre-test-post-test non-equivalent control group was adopted. The three intact classes, taught by…
NASA Astrophysics Data System (ADS)
Khawaldeh, Salem A. Al
2013-07-01
Background and purpose: The purpose of this study was to investigate the comparative effects of a prediction/discussion-based learning cycle (HPD-LC), conceptual change text (CCT) and traditional instruction on 10th grade students' understanding of genetics concepts. Sample: Participants were 112 male 10th-grade students in three classes of the same school located in an urban area. The three classes, taught by the same biology teacher, were randomly assigned as a prediction/discussion-based learning cycle class (n = 39), conceptual change text class (n = 37) and traditional class (n = 36). Design and method: A quasi-experimental research design of pre-test-post-test non-equivalent control group was adopted. Participants completed the Genetics Concept Test as pre-test and post-test, to examine the effects of instructional strategies on their genetics understanding. Pre-test scores and Test of Logical Thinking scores were used as covariates. Results: The analysis of covariance showed a statistically significant difference between the experimental and control groups in favor of the experimental groups after treatment. However, no statistically significant difference between the experimental groups (HPD-LC versus CCT instruction) was found. Conclusions: Overall, the findings of this study support the use of the prediction/discussion-based learning cycle and conceptual change text in both research and teaching. The findings may be useful for improving classroom practices in teaching science concepts and for the development of suitable materials promoting students' understanding of science.
Wilson, John Thomas
2000-01-01
A mathematical technique of estimating low-flow frequencies from base-flow measurements was evaluated by using data for streams in Indiana. Low-flow frequencies at low-flow partial-record stations were estimated by relating base-flow measurements to concurrent daily flows at nearby streamflow-gaging stations (index stations) for which low-flow frequency curves had been developed. A network of long-term streamflow-gaging stations in Indiana provided a sample of sites with observed low-flow frequencies. Observed values of 7-day, 10-year low flow and 7-day, 2-year low flow were compared to predicted values to evaluate the accuracy of the method. Five test cases were used to evaluate the method under a variety of conditions in which the location of the index station and its drainage area varied relative to the partial-record station. A total of 141 pairs of streamflow-gaging stations were used in the five test cases. Four of the test cases used one index station; the fifth test case used two index stations. The number of base-flow measurements was varied for each test case to see if the accuracy of the method was affected by the number of measurements used. The most accurate and least variable results were produced when two index stations on the same stream or tributaries of the partial-record station were used. All but one value of the predicted 7-day, 10-year low flow were within 15 percent of the values observed for the long-term continuous record, and all of the predicted values of the 7-day, 2-year low flow were within 15 percent of the observed values. This apparent accuracy, to some extent, may be a result of the small sample set of 15. Of the four test cases that used one index station, the most accurate and least variable results were produced in the test case where the index station and partial-record station were on the same stream or on streams tributary to each other and where the index station had a larger drainage area than the partial-record station.
In that test case, the method tended to overpredict, based on the median relative error. In 23 of 28 test pairs, the predicted 7-day, 10-year low flow was within 15 percent of the observed value; in 26 of 28 test pairs, the predicted 7-day, 2-year low flow was within 15 percent of the observed value. When the index station and partial-record station were on the same stream or streams tributary to each other and the index station had a smaller drainage area than the partial-record station, the method tended to underpredict the low-flow frequencies. Nineteen of 28 predicted values of the 7-day, 10-year low flow were within 15 percent of the observed values. Twenty-five of 28 predicted values of the 7-day, 2-year low flow were within 15 percent of the observed values. When the index station and the partial-record station were on different streams, the method tended to underpredict regardless of whether the index station had a larger or smaller drainage area than that of the partial-record station. Also, the variability of the relative error of estimate was greatest for the test cases that used index stations and partial-record stations from different streams. This variability, in part, may be caused by using more streamflow-gaging stations with small low-flow frequencies in these test cases. A small difference in the predicted and observed values can equate to a large relative error when dealing with stations that have small low-flow frequencies. In the test cases that used one index station, the method tended to predict smaller low-flow frequencies as the number of base-flow measurements was reduced from 20 to 5. Overall, the average relative error of estimate and the variability of the predicted values increased as the number of base-flow measurements was reduced.
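The index-station transfer described above is commonly implemented as a log-log regression of base-flow measurements against concurrent index-station flows, with the fitted relation then applied to the index station's low-flow statistic. A sketch with invented discharges, not Indiana gage data:

```python
import math

def fit_log_log(q_index, q_partial):
    """Least-squares fit of log(Q_partial) = a + b*log(Q_index), relating
    base-flow measurements to concurrent index-station daily flows."""
    xs = [math.log(q) for q in q_index]
    ys = [math.log(q) for q in q_partial]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
         sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

def estimate_low_flow(a, b, index_low_flow):
    """Transfer the index station's low-flow statistic (e.g. 7Q10) to the
    partial-record station through the fitted relation."""
    return math.exp(a + b * math.log(index_low_flow))

# Hypothetical concurrent measurements (cfs); illustrative only.
q_index   = [12.0, 20.0, 35.0, 60.0, 90.0]
q_partial = [4.0, 7.5, 12.0, 22.0, 30.0]
a, b = fit_log_log(q_index, q_partial)
est = estimate_low_flow(a, b, 8.0)  # assuming an index 7Q10 of 8.0 cfs
print(round(est, 2))
```

Reducing the number of base-flow measurements shrinks the sample used in the fit, which is why the study's error and variability grew as measurements dropped from 20 to 5.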
Accuracy and Calibration of High Explosive Thermodynamic Equations of State
2010-08-01
physics descriptions, but can also mean increased calibration complexity. A generalized extent of aluminum reaction, the Jones-Wilkins-Lee (JWL) based... [List-of-figures/tables residue: JWL and JWLB cylinder test predictions compared to experiments for PAX-30 and PAX-29; experiment and modeling comparisons for HMX/Al 85/15; LX-14 JWL and JWLB cylinder test velocity table.]
Binedell, J; Soldan, J R; Scourfield, J; Harper, P S
1996-01-01
Adolescents who are actively requesting Huntington's predictive testing of their own accord pose a dilemma to those providing testing. In the absence of empirical evidence as regards the impact of genetic testing on minors, current policy and guidelines, based on the ethical principles of non-maleficence and respect for individual autonomy and confidentiality, generally exclude the testing of minors. It is argued that adherence to an age-based exclusion criterion in Huntington's disease predictive testing protocols is out of step with trends in UK case law concerning minors' consent to medical treatment. Furthermore, contributions from developmental psychology and research into adolescents' decision-making competence suggest that adolescents can make informed choices about their health and personal lives. Criteria for developing an assessment approach to such requests are put forward, and the implications of a case-by-case evaluation of competence to consent in terms of clinicians' tolerance for uncertainty are discussed. PMID:8950670
HIV RNA testing in the context of nonoccupational postexposure prophylaxis.
Roland, Michelle E; Elbeik, Tarek A; Kahn, James O; Bamberger, Joshua D; Coates, Thomas J; Krone, Melissa R; Katz, Mitchell H; Busch, Michael P; Martin, Jeffrey N
2004-08-01
The specificity and positive predictive value of human immunodeficiency virus (HIV) RNA assays have not been evaluated in the setting of postexposure prophylaxis (PEP). Plasma from subjects enrolled in a nonoccupational PEP study was tested with 2 branched-chain DNA (bDNA) assays, 2 polymerase chain reaction (PCR) assays, and a transcription-mediated amplification (TMA) assay. Assay specificity and positive predictive value were determined for subjects who remained negative for HIV antibody for ≥3 months. In 329 subjects examined, the lowest specificities (90.1%-93.7%) were seen for bDNA testing performed in real time. The highest specificities were seen with batched bDNA version 3.0 (99.1%), standard PCR (99.4%), ultrasensitive PCR (100%), and TMA (99.6%) testing. Only the 2 assays with the highest specificities had positive predictive values >40%. For the bDNA assays, increasing the cutoff point at which a test is called positive (e.g., from 50 copies/mL to 500 copies/mL for version 3.0) increased both specificity and positive predictive values to 100%. The positive predictive value of HIV RNA assays in individuals presenting for PEP is unacceptably low for bDNA-based testing and possibly acceptable for PCR- and TMA-based testing. Routine use of HIV RNA assays in such individuals is not recommended.
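The two metrics at issue are simple functions of the confusion counts, and the cutoff effect follows directly: raising the positivity threshold removes low-level false positives. A sketch with invented counts, not the study's data:

```python
def specificity(true_neg, false_pos):
    """Specificity = TN / (TN + FP) among truly uninfected subjects."""
    return true_neg / (true_neg + false_pos)

def positive_predictive_value(true_pos, false_pos):
    """PPV = TP / (TP + FP): chance that a positive result is a real infection."""
    return true_pos / (true_pos + false_pos)

# Illustrative counts only: many uninfected subjects, few true infections --
# the regime in which PPV collapses even at decent specificity.
tn, fp, tp = 300, 20, 3
spec = specificity(tn, fp)
ppv = positive_predictive_value(tp, fp)
print(round(spec, 3), round(ppv, 3))  # ~0.94 specificity, yet PPV ~0.13

# Raising the cutoff so that all low-level false positives drop out:
ppv_high = positive_predictive_value(tp, 0)
print(ppv_high)  # 1.0
```

This is why a seemingly respectable 90-94% specificity is unacceptable when the pre-test probability of infection is low.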
Subscores and Validity. Research Report. ETS RR-08-64
ERIC Educational Resources Information Center
Haberman, Shelby J.
2008-01-01
In educational testing, subscores may be provided based on a portion of the items from a larger test. One consideration in evaluation of such subscores is their ability to predict a criterion score. Two limitations on prediction exist. The first, which is well known, is that the coefficient of determination for linear prediction of the criterion…
Hirota, Morihiko; Ashikaga, Takao; Kouzuki, Hirokazu
2018-04-01
It is important to predict the potential of cosmetic ingredients to cause skin sensitization, and in accordance with the European Union cosmetic directive for the replacement of animal tests, several in vitro tests based on the adverse outcome pathway have been developed for hazard identification, such as the direct peptide reactivity assay, KeratinoSens™ and the human cell line activation test. Here, we describe the development of an artificial neural network (ANN) prediction model for skin sensitization risk assessment based on the integrated testing strategy concept, using direct peptide reactivity assay, KeratinoSens™, human cell line activation test and an in silico or structure alert parameter. We first investigated the relationship between published murine local lymph node assay EC3 values, which represent skin sensitization potency, and in vitro test results using a panel of about 134 chemicals for which all the required data were available. Predictions based on ANN analysis using combinations of parameters from all three in vitro tests showed a good correlation with local lymph node assay EC3 values. However, when the ANN model was applied to a testing set of 28 chemicals that had not been included in the training set, predicted EC3s were overestimated for some chemicals. Incorporation of an additional in silico or structure alert descriptor (obtained with TIMES-M or Toxtree software) in the ANN model improved the results. Our findings suggest that the ANN model based on the integrated testing strategy concept could be useful for evaluating the skin sensitization potential. Copyright © 2017 John Wiley & Sons, Ltd.
A Flight Prediction for Performance of the SWAS Solar Array Deployment Mechanism
NASA Technical Reports Server (NTRS)
Seniderman, Gary; Daniel, Walter K.
1999-01-01
The focus of this paper is a comparison of ground-based solar array deployment tests with the on-orbit deployment. The discussion includes a summary of the mechanisms involved and the correlation of a dynamics model with ground-based test results. Some of the unique characteristics of the mechanisms are explained through the analysis of force and angle data acquired from the test deployments. The correlated dynamics model is then used to predict the performance of the system in its flight application.
On-Line, Self-Learning, Predictive Tool for Determining Payload Thermal Response
NASA Technical Reports Server (NTRS)
Jen, Chian-Li; Tilwick, Leon
2000-01-01
This paper will present the results of a joint ManTech / Goddard R&D effort, currently under way, to develop and test a computer-based, on-line, predictive simulation model for use by facility operators to predict the thermal response of a payload during thermal vacuum testing. Thermal response was identified as an area that could benefit from the algorithms developed by Dr. Jen for complex computer simulations. Most thermal vacuum test setups are unique, since no two payloads have the same thermal properties. This requires that the operators depend on their past experience to conduct the test, which requires time for them to learn how the payload responds while at the same time limiting any risk of exceeding hot or cold temperature limits. The predictive tool being developed is intended to be used with the new Thermal Vacuum Data System (TVDS) developed at Goddard for the Thermal Vacuum Test Operations group. This model can learn the thermal response of the payload by reading a few data points from the TVDS, accepting the payload's current temperature as the initial condition for prediction. The model can then be used as a predictive tool to estimate future payload temperatures according to a predetermined shroud temperature profile. If the error of prediction is too large, the model can be asked to re-learn the new situation on-line in real time and give a new prediction. Based on some preliminary tests, we feel this predictive model can forecast the payload temperature of the entire test cycle within 5 degrees Celsius after it has learned 3 times during the beginning of the test. The tool will allow the operator to play "what-if" experiments to decide on the best shroud temperature set-point control strategy. This tool will save money by minimizing guess work and optimizing transitions as well as making the testing process safer and easier to conduct.
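The learn-then-predict loop described here can be sketched with a first-order lumped thermal model whose coupling rate is re-estimated from recent data. This is a toy stand-in under stated assumptions, not the actual TVDS algorithm:

```python
def learn_rate(temps, shroud, dt):
    """Estimate the coupling rate k in the first-order model
    dT/dt = k * (T_shroud - T_payload) by least squares over recent
    samples -- a toy version of the on-line 'learning' step."""
    num = den = 0.0
    for i in range(len(temps) - 1):
        drive = shroud[i] - temps[i]
        num += drive * (temps[i + 1] - temps[i]) / dt
        den += drive * drive
    return num / den

def predict(t0, shroud_profile, k, dt):
    """Step the learned model forward over a planned shroud profile."""
    t, out = t0, []
    for ts in shroud_profile:
        t += k * (ts - t) * dt
        out.append(t)
    return out

# Synthetic "payload" generated with k = 0.05 per minute, shroud at 100 C.
dt, k_true = 1.0, 0.05
temps = [20.0]
for _ in range(10):
    temps.append(temps[-1] + k_true * (100.0 - temps[-1]) * dt)
k_hat = learn_rate(temps, [100.0] * 10, dt)
print(round(k_hat, 3))  # recovers 0.05 from the first few data points
future = predict(temps[-1], [100.0] * 60, k_hat, dt)
```

If the prediction error later grows too large (payload properties the model has not seen), the same `learn_rate` call is simply rerun on the newest window of data, mirroring the "re-learn on-line" behavior described above.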
A method for modeling aquatic toxicity data based on the theory of accelerated life testing and a procedure for maximum likelihood fitting of the proposed model is presented. The procedure is computerized as software, which can predict chronic lethality of chemicals using data from a...
Albitar, Maher; Ma, Wanlong; Lund, Lars; Shahbaba, Babak; Uchio, Edward; Feddersen, Søren; Moylan, Donald; Wojno, Kirk; Shore, Neal
2018-03-01
Distinguishing between low- and high-grade prostate cancers (PCa) is important, but biopsy may underestimate the actual grade of cancer. We have previously shown that urine/plasma-based prostate-specific biomarkers can predict high grade PCa. Our objective was to determine the accuracy of a test using cell-free RNA levels of biomarkers in predicting prostatectomy results. This multicenter community-based prospective study was conducted using urine/blood samples collected from 306 patients. All recruited patients were treatment-naïve, without metastases, and had been biopsied, designated a Gleason Score (GS) based on biopsy, and assigned to prostatectomy prior to participation in the study. The primary outcome measure was the urine/plasma test accuracy in predicting high grade PCa on prostatectomy compared with biopsy findings. Sensitivity and specificity were calculated using standard formulas, while comparisons between groups were performed using the Wilcoxon Rank Sum, Kruskal-Wallis, Chi-Square, and Fisher's exact test. GS as assigned by standard 10-12 core biopsies was 3 + 3 in 90 (29.4%), 3 + 4 in 122 (39.8%), 4 + 3 in 50 (16.3%), and > 4 + 3 in 44 (14.4%) patients. The urine/plasma assay confirmed a previous validation and was highly accurate in predicting the presence of high-grade PCa (Gleason ≥3 + 4) with sensitivity between 88% and 95% as verified by prostatectomy findings. GS was upgraded after prostatectomy in 27% of patients and downgraded in 12% of patients. This plasma/urine biomarker test accurately predicts high grade cancer as determined by prostatectomy with a sensitivity at 92-97%, while the sensitivity of core biopsies was 78%. © 2018 Wiley Periodicals, Inc.
The Theory of Planned Behavior as a Predictor of HIV Testing Intention.
Ayodele, Olabode
2017-03-01
This investigation tests the theory of planned behavior (TPB) as a predictor of HIV testing intention among Nigerian university undergraduate students. A cross-sectional study of 392 students was conducted using a self-administered structured questionnaire that measured socio-demographics, perceived risk of human immunodeficiency virus (HIV) infection, and TPB constructs. Analysis was based on 273 students who had never been tested for HIV. Hierarchical multiple regression analysis assessed the applicability of the TPB in predicting HIV testing intention and additional predictive value of perceived risk of HIV infection. The prediction model containing TPB constructs explained 35% of the variance in HIV testing intention, with attitude and perceived behavioral control making significant and unique contributions to intention. Perceived risk of HIV infection contributed marginally (2%) but significantly to the final prediction model. Findings supported the TPB in predicting HIV testing intention. Although future studies must determine the generalizability of these results, the findings highlight the importance of perceived behavioral control, attitude, and perceived risk of HIV infection in the prediction of HIV testing intention among students who have not previously tested for HIV.
Hypothesis testing and earthquake prediction.
Jackson, D D
1996-04-30
Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions.
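The first of the three tests above, comparing the number of actual earthquakes to the number predicted, can be sketched as a two-sided Poisson tail test. A minimal illustration with invented counts and forecast rate:

```python
import math

def poisson_number_test(n_observed, n_predicted):
    """Number test: two-sided tail probability of observing a count as
    extreme as n_observed when the forecast specifies a Poisson mean of
    n_predicted events over the test interval."""
    def cdf(k, lam):
        return sum(math.exp(-lam) * lam ** i / math.factorial(i)
                   for i in range(k + 1))
    lower = cdf(n_observed, n_predicted)                 # P(N <= observed)
    upper = (1.0 - cdf(n_observed - 1, n_predicted)
             if n_observed > 0 else 1.0)                 # P(N >= observed)
    return min(1.0, 2.0 * min(lower, upper))

# A hypothesis predicting 5 events is consistent with observing 7...
print(round(poisson_number_test(7, 5.0), 3))
# ...but would be rejected at the 5% level if 13 events occurred.
print(round(poisson_number_test(13, 5.0), 3))
```

The likelihood-score and likelihood-ratio tests (ii) and (iii) extend this by scoring where and how large the events were, not merely how many occurred.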
Takenouchi, Osamu; Fukui, Shiho; Okamoto, Kenji; Kurotani, Satoru; Imai, Noriyasu; Fujishiro, Miyuki; Kyotani, Daiki; Kato, Yoshinao; Kasahara, Toshihiko; Fujita, Masaharu; Toyoda, Akemi; Sekiya, Daisuke; Watanabe, Shinichi; Seto, Hirokazu; Hirota, Morihiko; Ashikaga, Takao; Miyazawa, Masaaki
2015-11-01
To develop a testing strategy incorporating the human cell line activation test (h-CLAT), direct peptide reactivity assay (DPRA) and DEREK, we created an expanded data set of 139 chemicals (102 sensitizers and 37 non-sensitizers) by combining the existing data set of 101 chemicals through the collaborative projects of the Japan Cosmetic Industry Association. Of the additional 38 chemicals, 15 chemicals with relatively low water solubility (log Kow > 3.5) were selected to clarify the limitations of testing strategies regarding lipophilic chemicals. Predictivities of the h-CLAT, DPRA and DEREK, and the combinations thereof, were evaluated by comparison to results of the local lymph node assay. When evaluating the 139 chemicals using combinations of the three methods based on the integrated testing strategy (ITS) concept (ITS-based test battery) and a sequential testing strategy (STS) weighing the predictive performance of the h-CLAT and DPRA, overall predictivities were similar to those previously found for the 101-chemical data set. An analysis of false negative chemicals suggested that a major limitation of our strategies was the testing of low water-soluble chemicals. When the negative results for chemicals with log Kow > 3.5 were excluded, the sensitivity and accuracy of ITS improved to 97% (91 of 94 chemicals) and 89% (114 of 128), respectively. Likewise, the sensitivity and accuracy of STS improved to 98% (92 of 94) and 85% (111 of 129). Moreover, the ITS and STS also showed good correlation with the local lymph node assay on three potency classifications, yielding accuracies of 74% (ITS) and 73% (STS). Thus, the inclusion of log Kow in the analysis could give both strategies a higher predictive performance. Copyright © 2015 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Lopes, Leonard; Redonnet, Stephane; Imamura, Taro; Ikeda, Tomoaki; Zawodny, Nikolas; Cunha, Guilherme
2015-01-01
The usage of Computational Fluid Dynamics (CFD) in noise prediction typically has been a two-part process: accurately predicting the flow conditions in the near-field and then propagating the noise from the near-field to the observer. Due to the increase in computing power and the cost benefit when weighed against wind tunnel testing, the usage of CFD to estimate the local flow field of complex geometrical structures has become more routine. Recently, the Benchmark problems in Airframe Noise Computation (BANC) workshops have provided a community focus on accurately simulating the local flow field near the body with various CFD approaches. However, to date, little effort has been devoted to assessing the impact of the propagation phase of noise prediction. This paper includes results from the BANC-III workshop, which explores variability in the propagation phase of CFD-based noise prediction. This includes two test cases: an analytical solution of a quadrupole source near a sphere and a computational solution around a nose landing gear. Agreement between three codes was very good for the analytic test case, but CFD-based noise predictions indicate that the propagation phase can introduce 3 dB or more of variability in noise predictions.
Tsai, Keng-Chang; Jian, Jhih-Wei; Yang, Ei-Wen; Hsu, Po-Chiang; Peng, Hung-Pin; Chen, Ching-Tai; Chen, Jun-Bo; Chang, Jeng-Yih; Hsu, Wen-Lian; Yang, An-Suei
2012-01-01
Non-covalent protein-carbohydrate interactions mediate molecular targeting in many biological processes. Prediction of non-covalent carbohydrate binding sites on protein surfaces not only provides insights into the functions of the query proteins; information on key carbohydrate-binding residues could suggest site-directed mutagenesis experiments, design therapeutics targeting carbohydrate-binding proteins, and provide guidance in engineering protein-carbohydrate interactions. In this work, we show that non-covalent carbohydrate binding sites on protein surfaces can be predicted with relatively high accuracy when the query protein structures are known. The prediction capabilities were based on a novel encoding scheme of the three-dimensional probability density maps describing the distributions of 36 non-covalent interacting atom types around protein surfaces. One machine learning model was trained for each of the 30 protein atom types. The machine learning algorithms predicted tentative carbohydrate binding sites on query proteins by recognizing the characteristic interacting atom distribution patterns specific for carbohydrate binding sites from known protein structures. The prediction results for all protein atom types were integrated into surface patches as tentative carbohydrate binding sites based on normalized prediction confidence level. The prediction capabilities of the predictors were benchmarked by a 10-fold cross validation on 497 non-redundant proteins with known carbohydrate binding sites. The predictors were further tested on an independent test set with 108 proteins. The residue-based Matthews correlation coefficient (MCC) for the independent test was 0.45, with prediction precision and sensitivity (or recall) of 0.45 and 0.49 respectively. In addition, 111 unbound carbohydrate-binding protein structures for which the structures were determined in the absence of the carbohydrate ligands were predicted with the trained predictors. 
The overall prediction MCC was 0.49. Independent tests on anti-carbohydrate antibodies showed that the carbohydrate antigen binding sites were predicted with comparable accuracy. These results demonstrate that the predictors are among the best in carbohydrate binding site predictions to date. PMID:22848404
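The residue-based MCC, precision, and recall reported above are standard confusion-matrix summaries. A minimal sketch of the arithmetic; the counts below are hypothetical, chosen only for illustration (they reproduce the reported precision of 0.45 and recall of 0.49, but the MCC additionally depends on the true-negative count):

```python
import math

def mcc(tp, fp, tn, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

# Hypothetical residue counts (not from the paper):
tp, fp, tn, fn = 490, 600, 9000, 510
scores = (mcc(tp, fp, tn, fn), precision(tp, fp), recall(tp, fn))
```

Unlike precision and recall, the MCC rewards correct rejection of the many non-binding surface residues, which is why it is the headline metric for this heavily imbalanced task.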
Space vehicle engine and heat shield environment review. Volume 1: Engineering analysis
NASA Technical Reports Server (NTRS)
Mcanelly, W. B.; Young, C. T. K.
1973-01-01
Methods for predicting the base heating characteristics of a multiple rocket engine installation are discussed. The environmental data are applied to the design of an adequate protection system for the engine components. The methods for predicting the base region thermal environment are categorized as: (1) scale model testing, (2) extrapolation of previous and related flight test results, and (3) semiempirical analytical techniques.
Horita, Kotomi; Horita, Daisuke; Tomita, Hiroyuki; Yasoshima, Mitsue; Yagami, Akiko; Matsunaga, Kayoko
2017-05-01
Animal testing for cosmetics was banned in the European Union (EU) in 2013; therefore, human tests to predict and ensure skin safety, such as the patch test or usage test, are now in demand in Japan as well as in the EU. To investigate the effects of different bases on the findings of tests that predict skin irritation, we performed patch testing (PT) and the repeated application test (RAT) using sodium lauryl sulfate (SLS), a well-known irritant, dissolved in 6 different base agents. The bases for PT were distilled water, 50% ethanol, 100% ethanol, a gel containing 50% ethanol, white petrolatum, and hydrophilic cream. The concentrations of SLS were 0.2% and 0.5%. Twelve different base combinations were applied to the normal back skin of 19 individuals for 24 h. RAT was performed with distilled water, 50% ethanol, 100% ethanol, a gel containing 50% ethanol, white petrolatum, and hydrophilic cream containing SLS at concentrations of 0.2%, 2%, and 5%, applied to the arms of the same PT subjects. The test preparation of each base was applied at the same site, with 0.2% SLS used in the first week, 2% SLS in the following week, and 5% SLS in the final week. The results of PT revealed that skin irritation scores varied when SLS at the same concentration was dissolved in different bases. The results of RAT showed that although skin irritation appeared with every base at a concentration of 5%, the positive rate was approximately the same. In conclusion, our results suggest that skin irritation elicited in PT depends on the base, while in RAT it does not depend on the type of base employed. Copyright © 2017 Elsevier B.V. All rights reserved.
Maeda, Yosuke; Hirosaki, Haruka; Yamanaka, Hidenori; Takeyoshi, Masahiro
2018-05-23
Photoallergic dermatitis, caused by pharmaceuticals and other consumer products, is a very important issue in human health. However, the S10 guideline of the International Conference on Harmonisation does not recommend the existing prediction methods for photoallergy because of their low predictability in human cases. We applied the local lymph node assay (LLNA), a reliable, quantitative skin sensitization prediction test, to develop a new photoallergy prediction method. This method involves a three-step approach: (1) ultraviolet (UV) absorption analysis; (2) determination of the no observed adverse effect level for skin phototoxicity based on LLNA; and (3) photoallergy evaluation based on LLNA. The photoallergic potential of chemicals was evaluated by comparing lymph node cell proliferation among groups treated with chemicals at minimal effect levels of skin sensitization and skin phototoxicity under UV irradiation (UV+) or non-UV irradiation (UV-). A case showing a significant difference (P < .05) in lymph node cell proliferation rates between UV- and UV+ groups was considered positive for a photoallergic reaction. After testing 13 chemicals, seven human photoallergens tested positive and the other six, with no evidence of causing photoallergic dermatitis or UV absorption, tested negative. Among these chemicals, doxycycline hydrochloride and minocycline hydrochloride were both tetracycline antibiotics with different photoallergic properties, and the new method clearly distinguished between the photoallergic properties of these chemicals. These findings suggested high predictability of our method; therefore, it is promising and effective in predicting human photoallergens. Copyright © 2018 John Wiley & Sons, Ltd.
Action perception as hypothesis testing.
Donnarumma, Francesco; Costantini, Marcello; Ambrosini, Ettore; Friston, Karl; Pezzulo, Giovanni
2017-04-01
We present a novel computational model that describes action perception as an active inferential process that combines motor prediction (the reuse of our own motor system to predict perceived movements) and hypothesis testing (the use of eye movements to disambiguate amongst hypotheses). The system uses a generative model of how (arm and hand) actions are performed to generate hypothesis-specific visual predictions, and directs saccades to the most informative places of the visual scene to test these predictions and their underlying hypotheses. We test the model using eye movement data from a human action observation study. In both the human study and our model, saccades are proactive whenever context affords accurate action prediction; but uncertainty induces a more reactive gaze strategy, via tracking the observed movements. Our model offers a novel perspective on action observation that highlights its active nature based on prediction dynamics and hypothesis testing. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Proposed Framework for Determining Added Mass of Orion Drogue Parachutes
NASA Technical Reports Server (NTRS)
Fraire, Usbaldo, Jr.; Dearman, James; Morris, Aaron
2011-01-01
The Crew Exploration Vehicle (CEV) Parachute Assembly System (CPAS) project is executing a program to qualify a parachute system for a next generation human spacecraft. Part of the qualification process involves predicting parachute riser tension during system descent with flight simulations. Human rating the CPAS hardware requires a high degree of confidence in the simulation models used to predict parachute loads. However, uncertainty exists in the heritage added mass models used for loads predictions due to a lack of supporting documentation and data. Even though CPAS anchors flight simulation loads predictions to flight tests, extrapolation of these models outside the test regime carries the risk of producing non-bounding loads. A set of equations based on empirically derived functions of skirt radius is recommended as the simplest and most viable method to test and derive an enhanced added mass model for an inflating parachute. This will increase confidence in the capability to predict parachute loads. The selected equations are based on those published in A Simplified Dynamic Model of Parachute Inflation by Dean Wolf. An Ames 80x120 wind tunnel test campaign is recommended to acquire the reefing line tension and canopy photogrammetric data needed to quantify the terms in the Wolf equations and reduce uncertainties in parachute loads predictions. Once the campaign is completed, the Wolf equations can be used to predict loads in a typical CPAS Drogue Flight test. Comprehensive descriptions of added mass test techniques from the Apollo Era to the current CPAS project are included for reference.
Risk-Based, Hypothesis-Driven Framework for Hydrological Field Campaigns with Case Studies
NASA Astrophysics Data System (ADS)
Harken, B.; Rubin, Y.
2014-12-01
There are several stages in any hydrological modeling campaign, including: formulation and analysis of a priori information, data acquisition through field campaigns, inverse modeling, and prediction of some environmental performance metric (EPM). The EPM being predicted could be, for example, contaminant concentration or plume travel time. These predictions often have significant bearing on a decision that must be made. Examples include: how to allocate limited remediation resources between contaminated groundwater sites or where to place a waste repository site. Answering such questions depends on predictions of EPMs using forward models as well as levels of uncertainty related to these predictions. Uncertainty in EPM predictions stems from uncertainty in model parameters, which can be reduced by measurements taken in field campaigns. The costly nature of field measurements motivates a rational basis for determining a measurement strategy that is optimal with respect to the uncertainty in the EPM prediction. The tool of hypothesis testing allows this uncertainty to be quantified by computing the significance of the test resulting from a proposed field campaign. The significance of the test gives a rational basis for determining the optimality of a proposed field campaign. This hypothesis testing framework is demonstrated and discussed using various synthetic case studies. This study involves contaminated aquifers where a decision must be made based on prediction of when a contaminant will arrive at a specified location. The EPM, in this case contaminant travel time, is cast into the hypothesis testing framework. The null hypothesis states that the contaminant plume will arrive at the specified location before a critical amount of time passes, and the alternative hypothesis states that the plume will arrive after the critical time passes. The optimality of different field campaigns is assessed by computing the significance of the test resulting from each one. 
Evaluating the level of significance caused by a field campaign involves steps including likelihood-based inverse modeling and semi-analytical conditional particle tracking.
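The arrival-time test described above can be sketched numerically: given an ensemble of travel times from conditional simulations, the significance is the estimated probability mass on one side of the critical time. A minimal Monte Carlo illustration; the lognormal distribution, its parameters, and the critical time are hypothetical stand-ins, not values from the study:

```python
import random

def prob_arrival_before(travel_times, t_critical):
    """Fraction of simulated plume travel times arriving before t_critical.

    Under the null hypothesis (arrival before t_critical), a very small
    estimate is evidence for the alternative (arrival after t_critical);
    a proposed field campaign is judged by how sharp this test becomes."""
    return sum(t <= t_critical for t in travel_times) / len(travel_times)

# Hypothetical ensemble of travel times (years) from conditional simulations
random.seed(1)
ensemble = [random.lognormvariate(3.0, 0.5) for _ in range(10_000)]
p_early = prob_arrival_before(ensemble, t_critical=15.0)
```

In the framework above, the ensemble would be regenerated after conditioning on each candidate measurement set, and campaigns compared by the resulting significance.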
Gettings, S D; Lordo, R A; Hintze, K L; Bagley, D M; Casterton, P L; Chudkowski, M; Curren, R D; Demetrulias, J L; Dipasquale, L C; Earl, L K; Feder, P I; Galli, C L; Glaza, S M; Gordon, V C; Janus, J; Kurtz, P J; Marenus, K D; Moral, J; Pape, W J; Renskers, K J; Rheins, L A; Roddy, M T; Rozen, M G; Tedeschi, J P; Zyracki, J
1996-01-01
The CTFA Evaluation of Alternatives Program is an evaluation of the relationship between data from the Draize primary eye irritation test and comparable data from a selection of promising in vitro eye irritation tests. In Phase III, data from the Draize test and 41 in vitro endpoints on 25 representative surfactant-based personal care formulations were compared. As in Phase I and Phase II, regression modeling of the relationship between maximum average Draize score (MAS) and in vitro endpoint was the primary approach adopted for evaluating in vitro assay performance. The degree of confidence in prediction of MAS for a given in vitro endpoint is quantified in terms of the relative widths of prediction intervals constructed about the fitted regression curve. Prediction intervals reflect not only the error attributed to the model but also the material-specific components of variation in both the Draize and the in vitro assays. Among the in vitro assays selected for regression modeling in Phase III, the relationship between MAS and in vitro score was relatively well defined. The prediction bounds on MAS were narrowest for materials at the lower or upper end of the effective irritation range (MAS = 0-45), where variability in MAS was smallest. Thus, the confidence with which the MAS of surfactant-based formulations is predicted is greatest when MAS approaches zero or when MAS approaches 45 (no comment is made on prediction of MAS > 45, since extrapolation beyond the range of observed data is not possible). No single in vitro endpoint was found to exhibit relative superiority with regard to prediction of MAS. Variability associated with Draize test outcome (e.g. in MAS values) must be considered in any future comparisons of in vivo and in vitro test results if the purpose is to predict in vivo response using in vitro data.
NASA Technical Reports Server (NTRS)
Saltsman, J. F.; Halford, G. R.
1979-01-01
The method of strainrange partitioning is used to predict the cyclic lives of the Metal Properties Council's long time creep-fatigue interspersion tests of several steel alloys. Comparisons are made with predictions based upon the time- and cycle-fraction approach. The method of strainrange partitioning is shown to give consistently more accurate predictions of cyclic life than is given by the time- and cycle-fraction approach.
NASA Astrophysics Data System (ADS)
Noor, M. J. Md; Ibrahim, A.; Rahman, A. S. A.
2018-04-01
Small-strain local measurement in the triaxial test is considerably more accurate than conventional external strain measurement, which is subject to systematic errors normally associated with the test. Three submersible miniature linear variable differential transducers (LVDTs) were mounted on yokes clamped directly onto the soil sample, spaced equally at 120° from one another. The setup, using a 0.4 N resolution load cell and a 16-bit AD converter, was capable of consistently resolving displacements of less than 1 µm and measuring axial strains ranging from less than 0.001% to 2.5%. Further analysis of the small-strain local measurement data was performed using the new Normalized Rotational Multiple Yield Surface Framework (NRMYSF) method and compared with the existing Rotational Multiple Yield Surface Framework (RMYSF) prediction method. The prediction of shear strength based on the combined intrinsic curvilinear shear strength envelope using small-strain triaxial test data confirmed the significant improvement and reliability of the measurement and analysis methods. Moreover, the NRMYSF method shows excellent data prediction and a significant improvement toward more reliable prediction of soil strength, which can reduce the cost and time of laboratory testing.
Information Technology and Literacy Assessment.
ERIC Educational Resources Information Center
Balajthy, Ernest
2002-01-01
Compares technology predictions from around 1989 with the technology of 2002. Discusses the place of computer-based assessment today, computer-scored testing, computer-administered formal assessment, Internet-based formal assessment, computerized adaptive tests, placement tests, informal assessment, electronic portfolios, information management,…
The user's guide describes the methods used by TEST to predict toxicity and physical properties (including the new mode of action based method used to predict acute aquatic toxicity). It describes all of the experimental data sets included in the tool. It gives the prediction res...
Theory-Based University Admissions Testing for a New Millennium
ERIC Educational Resources Information Center
Sternberg, Robert J.
2004-01-01
This article describes two projects based on Robert J. Sternberg's theory of successful intelligence and designed to provide theory-based testing for university admissions. The first, Rainbow Project, provided a supplementary test of analytical, practical, and creative skills to augment the SAT in predicting college performance. The Rainbow…
Predictive genetic testing for complex diseases: a public health perspective
Marzuillo, C.; De Vito, C.; D’Andrea, E.; Rosso, A.
2014-01-01
From a public health perspective, systematic, evidence-based technology assessments and economic evaluations are needed to guide the incorporation of genomics into clinical and public health practice. However, scientific evidence on the effectiveness of predictive genetic tests is difficult to obtain. This review first highlights the similarities and differences between traditional screening tests and predictive genetic testing for complex diseases and goes on to describe frameworks for the evaluation of genetic testing that have been developed in recent years providing some evidence that currently genetic tests are not used in an appropriate way. Nevertheless, evidence-based recommendations are already available for some genomic applications that can reduce morbidity and mortality and many more are expected to emerge over the next decade. The time is now ripe for the introduction of a range of genetic tests into healthcare practice, but this will require the development of specific health policies, proper public health evaluations, organizational changes within the healthcare systems, capacity building among the healthcare workforce and the education of the public. PMID:24049051
NASA Astrophysics Data System (ADS)
Maizir, H.; Suryanita, R.
2018-01-01
Over the past few decades, many methods have been developed to predict and evaluate the bearing capacity of driven piles. The problem of predicting and assessing the bearing capacity of a pile is complicated and not yet fully established; different soil tests and evaluation methods produce widely different solutions. The most important issue, however, is to determine methods that predict and evaluate the bearing capacity of the pile to the required degree of accuracy and consistency. Accurate prediction and evaluation of axial bearing capacity depend on several variables, such as the type of soil and the diameter and length of the pile. In this study, Artificial Neural Networks (ANNs) are utilized to obtain a more accurate and consistent axial bearing capacity of a driven pile. An ANN can be described as a mapping from input data to target output data. The ANN model was developed to predict and evaluate the axial bearing capacity of the pile based on pile driving analyzer (PDA) test data for more than 200 selected records. The predictions obtained by the ANN model were then compared with the PDA test results. This research shows that the neural network models give a sound prediction and evaluation of the axial bearing capacity of piles.
Chan, Siew Foong; Deeks, Jonathan J; Macaskill, Petra; Irwig, Les
2008-01-01
To compare three predictive models based on logistic regression to estimate adjusted likelihood ratios allowing for interdependency between diagnostic variables (tests). This study was a review of the theoretical basis, assumptions, and limitations of published models, and a statistical extension of methods and application to a case study of the diagnosis of obstructive airways disease based on history and clinical examination. Albert's method includes an offset term to estimate an adjusted likelihood ratio for combinations of tests. The Spiegelhalter and Knill-Jones method uses the unadjusted likelihood ratio for each test as a predictor and computes shrinkage factors to allow for interdependence. Knottnerus' method differs from the other methods because it requires sequencing of tests, which limits its application to situations where there are few tests and substantial data. Although parameter estimates differed between the models, predicted "posttest" probabilities were generally similar. Construction of predictive models using logistic regression is preferred to the independence Bayes' approach when it is important to adjust for dependency between test errors. Methods to estimate adjusted likelihood ratios from predictive models should be considered in preference to a standard logistic regression model to facilitate ease of interpretation and application. Albert's method provides the most straightforward approach.
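The contrast between independence Bayes and a shrinkage-adjusted update in the spirit of Spiegelhalter and Knill-Jones can be shown in a few lines. This is a hedged sketch: the prior, the likelihood ratios, and the single shared shrinkage factor are hypothetical (in the actual method, each test's log likelihood ratio receives its own estimated shrinkage coefficient from a logistic regression):

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

def posttest_prob(prior, log_lrs, shrinkage=1.0):
    """Add (optionally shrunken) log likelihood ratios to the prior log odds.

    shrinkage=1.0 reproduces independence Bayes; shrinkage < 1 discounts
    the combined evidence of correlated tests."""
    return inv_logit(logit(prior) + shrinkage * sum(log_lrs))

# Hypothetical values: two positive tests with LR+ of 4.0 and 2.5
prior = 0.2
log_lrs = [math.log(4.0), math.log(2.5)]
naive = posttest_prob(prior, log_lrs)          # independence Bayes
adjusted = posttest_prob(prior, log_lrs, 0.8)  # shrunken for dependence
```

With these numbers the naive update multiplies prior odds of 0.25 by 10, giving a post-test probability of 2.5/3.5 ≈ 0.71; shrinkage pulls this back toward the prior.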
Recent Achievements of the Collaboratory for the Study of Earthquake Predictability
NASA Astrophysics Data System (ADS)
Jordan, T. H.; Liukis, M.; Werner, M. J.; Schorlemmer, D.; Yu, J.; Maechling, P. J.; Jackson, D. D.; Rhoades, D. A.; Zechar, J. D.; Marzocchi, W.
2016-12-01
The Collaboratory for the Study of Earthquake Predictability (CSEP) supports a global program to conduct prospective earthquake forecasting experiments. CSEP testing centers are now operational in California, New Zealand, Japan, China, and Europe, with 442 models under evaluation. The California testing center, started by SCEC on Sept 1, 2007, currently hosts 30-minute, 1-day, 3-month, 1-year, and 5-year forecasts, both alarm-based and probabilistic, for California, the Western Pacific, and worldwide. Our tests are now based on the hypocentral locations and magnitudes of cataloged earthquakes, but we plan to test focal mechanisms, seismic hazard models, ground motion forecasts, and finite rupture forecasts as well. We have increased computational efficiency for high-resolution global experiments, such as the evaluation of the Global Earthquake Activity Rate (GEAR) model, introduced Bayesian ensemble models, and implemented support for non-Poissonian simulation-based forecast models. We are currently developing formats and procedures to evaluate externally hosted forecasts and predictions. CSEP supports the USGS program in operational earthquake forecasting and a DHS project to register and test external forecast procedures from experts outside seismology. We found that earthquakes as small as magnitude 2.5 provide important information on subsequent earthquakes larger than magnitude 5. A retrospective experiment for the 2010-2012 Canterbury earthquake sequence showed that some physics-based and hybrid models outperform catalog-based (e.g., ETAS) models. This experiment also demonstrates the ability of the CSEP infrastructure to support retrospective forecast testing. Current CSEP development activities include adoption of the Comprehensive Earthquake Catalog (ComCat) as an authorized data source, retrospective testing of simulation-based forecasts, and support for additive ensemble methods.
We describe the open-source CSEP software that is available to researchers as they develop their forecast models. We also discuss how CSEP procedures are being adapted to intensity and ground motion prediction experiments as well as hazard model testing.
Kopp-Schneider, Annette; Prieto, Pilar; Kinsner-Ovaskainen, Agnieszka; Stanzel, Sven
2013-06-01
In the framework of toxicology, a testing strategy can be viewed as a series of steps taken to arrive at a final prediction about a characteristic of a compound under study. The testing strategy is performed either as a single-step procedure, usually called a test battery, simultaneously using all information collected on different endpoints, or as a tiered approach in which a decision tree is followed. Design of a testing strategy involves statistical considerations, such as the development of a statistical prediction model. During the EU FP6 ACuteTox project, several prediction models were proposed on the basis of statistical classification algorithms, which we illustrate here. The final choice of testing strategies was not based on statistical considerations alone. However, without thorough statistical evaluations a testing strategy cannot be identified. We present here a number of observations made from the statistical viewpoint which relate to the development of testing strategies. The points we make were derived from problems we had to deal with during the evaluation of this large research project. A central issue during the development of a prediction model is the danger of overfitting. Procedures are presented to deal with this challenge. Copyright © 2012 Elsevier Ltd. All rights reserved.
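One standard guard against the overfitting danger noted above is to fit the prediction model on part of the data and score it only on the held-out remainder. A minimal k-fold splitting sketch, as a generic illustration rather than the project's actual evaluation code:

```python
def k_fold_indices(n, k):
    """Partition sample indices 0..n-1 into k disjoint test folds.

    Each test fold is paired with the remaining indices used for fitting,
    so the prediction model is never scored on a compound it was fit on."""
    folds = [list(range(i, n, k)) for i in range(k)]
    return [(sorted(set(range(n)) - set(fold)), fold) for fold in folds]

# 10 compounds, 5 folds: every compound lands in exactly one test fold.
splits = k_fold_indices(10, 5)
```

Classifier performance is then summarized over the k held-out folds, giving an estimate far less optimistic than re-scoring on the training data.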
Computational Pollutant Environment Assessment from Propulsion-System Testing
NASA Technical Reports Server (NTRS)
Wang, Ten-See; McConnaughey, Paul; Chen, Yen-Sen; Warsi, Saif
1996-01-01
An asymptotic plume growth method based on a time-accurate three-dimensional computational fluid dynamics formulation has been developed to assess the exhaust-plume pollutant environment from a simulated RD-170 engine hot-fire test on the F1 Test Stand at Marshall Space Flight Center. Researchers have long known that rocket-engine hot firing has the potential for forming thermal nitric oxides, as well as producing carbon monoxide when hydrocarbon fuels are used. Because of the complex physics involved, most attempts to predict the pollutant emissions from ground-based engine testing have used simplified methods, which may grossly underpredict and/or overpredict the pollutant formations in a test environment. The objective of this work has been to develop a computational fluid dynamics-based methodology that replicates the underlying test-stand flow physics to accurately and efficiently assess pollutant emissions from ground-based rocket-engine testing. A nominal RD-170 engine hot-fire test was computed, and pertinent test-stand flow physics was captured. The predicted total emission rates compared reasonably well with those of the existing hydrocarbon engine hot-firing test data.
Dallmann, André; Ince, Ibrahim; Coboeken, Katrin; Eissing, Thomas; Hempel, Georg
2017-09-18
Physiologically based pharmacokinetic modeling is considered a valuable tool for predicting pharmacokinetic changes in pregnancy to subsequently guide in-vivo pharmacokinetic trials in pregnant women. The objective of this study was to extend and verify a previously developed physiologically based pharmacokinetic model for pregnant women for the prediction of pharmacokinetics of drugs metabolized via several cytochrome P450 enzymes. Quantitative information on gestation-specific changes in enzyme activity available in the literature was incorporated in a pregnancy physiologically based pharmacokinetic model and the pharmacokinetics of eight drugs metabolized via one or multiple cytochrome P450 enzymes was predicted. The tested drugs were caffeine, midazolam, nifedipine, metoprolol, ondansetron, granisetron, diazepam, and metronidazole. Pharmacokinetic predictions were evaluated by comparison with in-vivo pharmacokinetic data obtained from the literature. The pregnancy physiologically based pharmacokinetic model successfully predicted the pharmacokinetics of all tested drugs. The observed pregnancy-induced pharmacokinetic changes were qualitatively and quantitatively reasonably well predicted for all drugs. Ninety-seven percent of the mean plasma concentrations predicted in pregnant women fell within a twofold error range and 63% within a 1.25-fold error range. For all drugs, the predicted area under the concentration-time curve was within a 1.25-fold error range. The presented pregnancy physiologically based pharmacokinetic model can quantitatively predict the pharmacokinetics of drugs that are metabolized via one or multiple cytochrome P450 enzymes by integrating prior knowledge of the pregnancy-related effect on these enzymes. 
This pregnancy physiologically based pharmacokinetic model may thus be used to identify potential exposure changes in pregnant women a priori and to eventually support informed decision making when clinical trials are designed in this special population.
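The twofold and 1.25-fold error ranges used above are a common way to score PBPK predictions: a prediction counts as acceptable if the predicted-to-observed ratio lies within the fold bound. A minimal sketch with hypothetical concentration pairs (same units; not data from the study):

```python
def within_fold(predicted, observed, fold):
    """Fraction of predictions whose ratio to the observation lies
    within [1/fold, fold] -- the usual fold-error acceptance band."""
    ratios = [p / o for p, o in zip(predicted, observed)]
    return sum(1 / fold <= r <= fold for r in ratios) / len(ratios)

# Hypothetical predicted/observed plasma concentrations, illustration only:
pred = [1.2, 0.8, 2.5, 0.9, 3.1]
obs = [1.0, 1.0, 1.5, 1.0, 1.2]
two_fold = within_fold(pred, obs, 2.0)    # loose criterion
tight = within_fold(pred, obs, 1.25)      # strict criterion
```

The paper's 97% within twofold and 63% within 1.25-fold are exactly this statistic computed over the predicted mean concentration-time points.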
Parrish, Rudolph S.; Smith, Charles N.
1990-01-01
A quantitative method is described for testing whether model predictions fall within a specified factor of true values. The technique is based on classical theory for confidence regions on unknown population parameters and can be related to hypothesis testing in both univariate and multivariate situations. A capability index is defined that can be used as a measure of predictive capability of a model, and its properties are discussed. The testing approach and the capability index should facilitate model validation efforts and permit comparisons among competing models. An example is given for a pesticide leaching model that predicts chemical concentrations in the soil profile.
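The idea of testing whether predictions fall within a specified factor of true values can be sketched as a confidence interval on the mean log ratio compared against the factor bound. This is a simplified, large-sample, univariate stand-in for the paper's confidence-region procedure (the z critical value and the normal-theory interval are assumptions of this sketch):

```python
import math
import statistics

def within_factor_test(predicted, observed, factor, z=1.96):
    """Approximate test that model predictions fall within `factor` of
    true values: builds a normal-theory confidence interval on the mean
    log ratio and checks containment in [-log(factor), +log(factor)]."""
    logs = [math.log(p / o) for p, o in zip(predicted, observed)]
    mean = statistics.fmean(logs)
    half = z * statistics.stdev(logs) / math.sqrt(len(logs))
    bound = math.log(factor)
    return -bound <= mean - half and mean + half <= bound
```

A capability index in this spirit could be the ratio of the interval half-width plus bias to the log-factor bound, with values below one indicating adequate predictive capability.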
Testing and analysis of internal hardwood log defect prediction models
R. Edward Thomas
2011-01-01
The severity and location of internal defects determine the quality and value of lumber sawn from hardwood logs. Models have been developed to predict the size and position of internal defects based on external defect indicator measurements. These models were shown to predict approximately 80% of all internal knots based on external knot indicators. However, the size...
Cost-Sensitive Radial Basis Function Neural Network Classifier for Software Defect Prediction
Kumudha, P.; Venkatesan, R.
2016-01-01
Effective prediction of software modules that are prone to defects will enable software developers to allocate resources efficiently and to concentrate on quality assurance activities. The software development life cycle basically includes design, analysis, implementation, testing, and release phases. Software testing is a critical task in the software development process, wherein the aim is to save time and budget by detecting defects as early as possible and to deliver a defect-free (bug-free) product to the customers. This testing phase should be operated carefully and effectively. To improve the software testing process, fault prediction methods identify the software parts that are most likely to be defect-prone. This paper proposes a prediction approach based on a conventional radial basis function neural network (RBFNN) and a novel adaptive dimensional biogeography based optimization (ADBBO) model. The developed ADBBO-based RBFNN model is tested with five publicly available datasets from the NASA data program repository. The computed results prove the effectiveness of the proposed ADBBO-RBFNN classifier approach with respect to the considered metrics in comparison with the early predictors available in the literature for the same datasets. PMID:27738649
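An RBFNN classifier of the kind described scores a module's metrics vector through Gaussian basis functions centered on prototype points, followed by a linear output layer. A minimal forward-pass sketch; the centers, weights, bias, and gamma below are hypothetical stand-ins for parameters that the paper tunes with its ADBBO optimizer:

```python
import math

def rbf_features(x, centers, gamma):
    """Gaussian radial basis activations of a metrics vector x."""
    return [math.exp(-gamma * sum((xi - ci) ** 2 for xi, ci in zip(x, c)))
            for c in centers]

def rbfnn_score(x, centers, weights, bias, gamma=1.0):
    """Linear output layer over RBF activations, squashed to (0, 1);
    a score above 0.5 would flag the module as defect-prone."""
    phi = rbf_features(x, centers, gamma)
    s = bias + sum(w * p for w, p in zip(weights, phi))
    return 1 / (1 + math.exp(-s))

# Hypothetical 2-metric module and two prototype centers:
centers = [(0.0, 0.0), (1.0, 1.0)]
score = rbfnn_score((0.9, 1.1), centers, weights=[-2.0, 3.0], bias=-0.5)
```

In practice the centers might come from clustering the training modules and the output weights from least squares, with the cost-sensitive and ADBBO elements layered on top of this basic structure.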
NASA Technical Reports Server (NTRS)
Neustein, Joseph; Schafer, Louis J., Jr.
1946-01-01
Several methods of predicting the compressible-flow pressure loss across a baffled aircraft-engine cylinder were analytically related and were experimentally investigated on a typical air-cooled aircraft-engine cylinder. Tests with and without heat transfer covered a wide range of cooling-air flows and simulated altitudes from sea level to 40,000 feet. Both the analysis and the test results showed that the method based on the density determined by the static pressure and the stagnation temperature at the baffle exit gave results comparable with those obtained from methods derived by one-dimensional-flow theory. The method based on a characteristic Mach number, although related analytically to one-dimensional-flow theory, was found impractical in the present tests because of the difficulty encountered in defining the proper characteristic state of the cooling air. Accurate predictions of altitude pressure loss can apparently be made by these methods, provided that they are based on the results of sea-level tests with heat transfer.
High Temperature Burst Testing of a Superalloy Disk With a Dual Grain Structure
NASA Technical Reports Server (NTRS)
Gayda, J.; Kantzos, P.
2004-01-01
Elevated temperature burst testing of a disk with a dual grain structure, made from the advanced nickel-base superalloy LSHR, was conducted. The disk had a fine grain bore and a coarse grain rim, produced using NASA's low cost DMHT technology. The spin testing showed the disk burst at 42,530 rpm, in line with predictions based on a 2-D finite element analysis. Further, significant growth of the disk was observed before failure, which was also in line with predictions.
Failure Pressure and Leak Rate of Steam Generator Tubes With Stress Corrosion Cracks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Majumdar, S.; Kasza, K.; Park, J.Y.
2002-07-01
This paper illustrates the use of an 'equivalent rectangular crack' approach to predict leak rates through laboratory-generated stress corrosion cracks. A comparison between predicted and observed test data on rupture and leak rate from laboratory-generated stress corrosion cracks is provided. Specimen flaws were sized by post-test fractography in addition to a pre-test advanced eddy current technique. The test failure pressures and leak rates are shown to be closer to those predicted on the basis of fractography than on NDE. However, the predictions based on NDE results are encouraging, particularly because they have the potential to determine a more detailed geometry of ligamentous cracks from which more accurate predictions of failure pressure and leak rate can be made in the future. (authors)
ERIC Educational Resources Information Center
Immekus, Jason C.; Atitya, Ben
2016-01-01
Interim tests are a central component of district-wide assessment systems, yet their technical quality to guide decisions (e.g., instructional) has been repeatedly questioned. In response, the study purpose was to investigate the validity of a series of English Language Arts (ELA) interim assessments in terms of dimensionality and prediction of…
Wijdenes-Pijl, Miranda; Dondorp, Wybo J; Timmermans, Danielle Rm; Cornel, Martina C; Henneman, Lidewij
2011-07-05
This study assessed lay perceptions of issues related to predictive genetic testing for multifactorial diseases. These perceived issues may differ from the "classic" issues, e.g. autonomy, discrimination, and psychological harm that are considered important in predictive testing for monogenic disorders. In this study, type 2 diabetes was used as an example, and perceptions with regard to predictive testing based on DNA test results and family history assessment were compared. Eight focus group interviews were held with 45 individuals aged 35-70 years with (n = 3) and without (n = 1) a family history of diabetes, mixed groups of these two (n = 2), and diabetes patients (n = 2). All interviews were transcribed and analysed using Atlas-ti. Most participants believed in the ability of a predictive test to identify people at risk for diabetes and to motivate preventive behaviour. Different reasons underlying motivation were considered when comparing DNA test results and a family history risk assessment. A perceived drawback of DNA testing was that diabetes was considered not severe enough for this type of risk assessment. In addition, diabetes family history assessment was not considered useful by some participants, since there are also other risk factors involved, not everyone has a diabetes family history or knows their family history, and it might have a negative influence on family relations. Respect for autonomy of individuals was emphasized more with regard to DNA testing than family history assessment. Other issues such as psychological harm, discrimination, and privacy were only briefly mentioned for both tests. The results suggest that most participants believe a predictive genetic test could be used in the prevention of multifactorial disorders, such as diabetes, but indicate points to consider before both these tests are applied. 
These considerations differ with regard to the method of assessment (DNA test or obtaining family history) and also differ from monogenic disorders.
Sun, Rongrong; Wang, Yuanyuan
2008-11-01
Predicting the spontaneous termination of atrial fibrillation (AF) leads not only to a better understanding of the mechanisms of the arrhythmia but also to improved treatment of sustained AF. A novel method is proposed to characterize the AF based on the structure and quantification of the recurrence plot (RP) to predict the termination of the AF. The RP of the electrocardiogram (ECG) signal is first obtained and eleven features are extracted to characterize its three basic patterns. Then the sequential forward search (SFS) algorithm and the Davies-Bouldin criterion are utilized to select the feature subset which can predict the AF termination effectively. Finally, the multilayer perceptron (MLP) neural network is applied to predict the AF termination. An AF database which includes one training set and two testing sets (A and B) of Holter ECG recordings is studied. Experiment results show that 97% of testing set A and 95% of testing set B are correctly classified. This demonstrates that the algorithm has the ability to predict the spontaneous termination of the AF effectively.
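The sequential forward search used for feature selection above can be sketched in a few lines of Python. The scoring function below is a toy stand-in (the paper scores candidate subsets with a Davies-Bouldin-based criterion), so the names and values here are illustrative assumptions, not the authors' code:

```python
def sequential_forward_search(features, score_fn, k):
    """Greedily grow a feature subset: at each step add the single
    remaining feature that maximizes score_fn(subset)."""
    selected = []
    remaining = list(features)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda f: score_fn(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy stand-in score: reward subsets whose index sum is close to a target,
# purely to exercise the search (a real score would measure class
# separability of the RP features).
def toy_score(subset, target=7):
    return -abs(sum(subset) - target)

print(sequential_forward_search(range(1, 6), toy_score, 3))  # → [5, 2, 1]
```

The greedy search never revisits earlier choices, which is what makes SFS fast but only locally optimal.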
ZY3-02 Laser Altimeter Footprint Geolocation Prediction
Xie, Junfeng; Tang, Xinming; Mo, Fan; Li, Guoyuan; Zhu, Guangbin; Wang, Zhenming; Fu, Xingke; Gao, Xiaoming; Dou, Xianhui
2017-01-01
Successfully launched on 30 May 2016, ZY3-02 is the first Chinese surveying and mapping satellite equipped with a lightweight laser altimeter. Calibration is necessary before the laser altimeter becomes operational. Laser footprint location prediction is the first step in calibration based on ground infrared detectors, and it is difficult because the sampling frequency of the ZY3-02 laser altimeter is 2 Hz and the distance between two adjacent laser footprints is about 3.5 km. In this paper, we build an on-orbit rigorous geometric prediction model referenced to the rigorous geometric model of optical remote sensing satellites. The model includes three kinds of data that must be predicted: pointing angle, orbit parameters, and attitude angles. The proposed method is verified by a ZY3-02 laser altimeter on-orbit geometric calibration test. Five laser footprint prediction experiments were conducted based on the model, and the laser footprint prediction accuracy is better than 150 m on the ground. The effectiveness and accuracy of the on-orbit rigorous geometric prediction model are confirmed by the test results. The geolocation is predicted precisely by the proposed method, which provides a reference for geolocation prediction of future land laser detectors in other laser altimeter calibration tests. PMID:28934160
Grinberg, A; Lopez-Villalobos, N; Lawrence, K; Nulsen, M
2005-10-01
To gauge how well prior laboratory test results predict in vitro penicillin resistance of Staphylococcus aureus isolates from dairy cows with mastitis. Population-based data on the farm of origin (n=79), genotype based on pulsed-field gel electrophoresis (PFGE) results, and the penicillin-resistance status of Staph. aureus isolates (n=115) from milk samples collected from dairy cows with mastitis submitted to two diagnostic laboratories over a 6-month period were used. Data were mined stochastically using the all-possible-pairs method, binomial modelling and bootstrap simulation, to test whether prior test results enhance the accuracy of prediction of penicillin resistance on farms. Of all Staph. aureus isolates tested, 38% were penicillin resistant. A significant aggregation of penicillin-resistance status was evident within farms. The probability of random pairs of isolates from the same farm having the same penicillin-resistance status was 76%, compared with 53% for random pairings of samples across all farms. Thus, the resistance status of randomly selected isolates was 1.43 times more likely to correctly predict the status of other isolates from the same farm than the random population pairwise concordance probability (p=0.011). This effect was likely due to the clonal relationship of isolates within farms, as the predictive fraction attributable to prior test results was close to nil when the effect of within-farm clonal infections was withdrawn from the model. Knowledge of the penicillin-resistance status of a prior Staph. aureus isolate significantly enhanced the predictive capability of other isolates from the same farm. In the time and space frame of this study, clinicians using previous information from a farm would have more accurately predicted the penicillin-resistance status of an isolate than they would by chance alone on farms infected with clonal Staph. aureus isolates, but not on farms infected with highly genetically heterogeneous bacterial strains.
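The within-farm versus across-farm pairwise concordance that drives this analysis is easy to illustrate. The farm data below are hypothetical, chosen only to show the computation, not the study's data:

```python
from itertools import combinations

def concordance(statuses):
    """Probability that a randomly chosen pair from `statuses` agrees."""
    pairs = list(combinations(statuses, 2))
    return sum(a == b for a, b in pairs) / len(pairs)

# Hypothetical penicillin-resistance statuses (True = resistant) of
# isolates, grouped by farm of origin.
farms = {"A": [True, True, True], "B": [False, False], "C": [True, False, False]}

overall = concordance([s for v in farms.values() for s in v])
within = [concordance(v) for v in farms.values()]
print(round(overall, 3), [round(w, 3) for w in within])  # → 0.429 [1.0, 1.0, 0.333]
```

The gap between within-farm and overall concordance is what the abstract quantifies as the 76% versus 53% pairwise agreement.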
ERIC Educational Resources Information Center
Zeidner, Moshe
1987-01-01
This study examined the cross-cultural validity of the sex bias contention with respect to standardized aptitude testing, used for academic prediction purposes in Israel. Analyses were based on the grade point average and scores of 1778 Jewish and 1017 Arab students who were administered standardized college entrance test batteries. (Author/LMO)
HIV-1 protease cleavage site prediction based on two-stage feature selection method.
Niu, Bing; Yuan, Xiao-Cheng; Roeper, Preston; Su, Qiang; Peng, Chun-Rong; Yin, Jing-Yuan; Ding, Juan; Li, HaiPeng; Lu, Wen-Cong
2013-03-01
Knowledge of the mechanism of HIV protease cleavage specificity is critical to the design of specific and effective HIV inhibitors. An accurate, robust, and rapid method to correctly predict the cleavage sites in proteins is crucial when searching for possible HIV inhibitors. In this article, HIV-1 protease specificity was studied using the correlation-based feature subset (CfsSubset) selection method combined with a genetic algorithm. Thirty important biochemical features were found based on a jackknife test from the original data set containing 4,248 features. Using the AdaBoost method with the thirty selected features, the prediction model yields an accuracy of 96.7% for the jackknife test and 92.1% for an independent set test, with increased accuracy over the original dataset by 6.7% and 77.4%, respectively. Our feature selection scheme could be a useful technique for finding effective competitive inhibitors of HIV protease.
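A jackknife (leave-one-out) test of the kind reported here can be sketched as follows. The 1-nearest-neighbour classifier and the toy cleavage data are stand-ins for the paper's AdaBoost model and biochemical features:

```python
def jackknife_accuracy(X, y, classify):
    """Leave-one-out (jackknife) test: train on all samples but one,
    predict the held-out sample, repeat for every sample."""
    hits = 0
    for i in range(len(X)):
        train_X = X[:i] + X[i + 1:]
        train_y = y[:i] + y[i + 1:]
        hits += classify(train_X, train_y, X[i]) == y[i]
    return hits / len(X)

# Stand-in 1-nearest-neighbour classifier (the paper uses AdaBoost).
def one_nn(train_X, train_y, query):
    d = [sum((a - b) ** 2 for a, b in zip(x, query)) for x in train_X]
    return train_y[d.index(min(d))]

# Invented 1-D feature values for illustration only.
X = [(0.0,), (0.1,), (0.9,), (1.0,)]
y = ["cleaved", "cleaved", "not", "not"]
print(jackknife_accuracy(X, y, one_nn))  # → 1.0
```

Because every sample is held out exactly once, the jackknife estimate uses the whole data set without ever testing on training data.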
Does rational selection of training and test sets improve the outcome of QSAR modeling?
Martin, Todd M; Harten, Paul; Young, Douglas M; Muratov, Eugene N; Golbraikh, Alexander; Zhu, Hao; Tropsha, Alexander
2012-10-22
Prior to using a quantitative structure activity relationship (QSAR) model for external predictions, its predictive power should be established and validated. In the absence of a true external data set, the best way to validate the predictive ability of a model is to perform its statistical external validation. In statistical external validation, the overall data set is divided into training and test sets. Commonly, this splitting is performed using random division. Rational splitting methods can divide data sets into training and test sets in an intelligent fashion. The purpose of this study was to determine whether rational division methods lead to more predictive models compared to random division. A special data splitting procedure was used to facilitate the comparison between random and rational division methods. For each toxicity end point, the overall data set was divided into a modeling set (80% of the overall set) and an external evaluation set (20% of the overall set) using random division. The modeling set was then subdivided into a training set (80% of the modeling set) and a test set (20% of the modeling set) using rational division methods and by using random division. The Kennard-Stone, minimal test set dissimilarity, and sphere exclusion algorithms were used as the rational division methods. The hierarchical clustering, random forest, and k-nearest neighbor (kNN) methods were used to develop QSAR models based on the training sets. For kNN QSAR, multiple training and test sets were generated, and multiple QSAR models were built. The results of this study indicate that models based on rational division methods generate better statistical results for the test sets than models based on random division, but the predictive power of both types of models is comparable.
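Of the rational splitting methods compared, Kennard-Stone is the easiest to sketch: seed the training set with the two most distant samples, then repeatedly add the sample whose minimum distance to the already-selected set is largest (a maximin rule). A minimal pure-Python version over made-up 2-D points:

```python
def kennard_stone(points, n_train,
                  dist=lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5):
    """Kennard-Stone selection: return indices of n_train points chosen
    to cover the data space (seed = most distant pair, then maximin)."""
    idx = range(len(points))
    i0, j0 = max(((i, j) for i in idx for j in idx if i < j),
                 key=lambda p: dist(points[p[0]], points[p[1]]))
    selected = [i0, j0]
    while len(selected) < n_train:
        rest = [i for i in idx if i not in selected]
        nxt = max(rest, key=lambda i: min(dist(points[i], points[j])
                                          for j in selected))
        selected.append(nxt)
    return selected

# Invented descriptor points, for illustration only.
pts = [(0, 0), (1, 0), (5, 0), (0.1, 0.1), (2.5, 0)]
print(kennard_stone(pts, 3))  # → [0, 2, 4]
```

The selected points spread across the descriptor space, which is exactly why rational splits tend to give training sets that "cover" the test set better than random splits.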
Dai, Zi-Ru; Ai, Chun-Zhi; Ge, Guang-Bo; He, Yu-Qi; Wu, Jing-Jing; Wang, Jia-Yue; Man, Hui-Zi; Jia, Yan; Yang, Ling
2015-06-30
Early prediction of xenobiotic metabolism is essential for drug discovery and development. As the most important human drug-metabolizing enzyme, cytochrome P450 3A4 has a large active cavity and metabolizes a broad spectrum of substrates. The poor substrate specificity of CYP3A4 makes it a huge challenge to predict the metabolic site(s) on its substrates. This study aimed to develop a mechanism-based prediction model built on two key parameters, the binding conformation and the reaction activity of ligands, which could reveal the process of the real metabolic reaction(s) and the site(s) of modification. The newly established model was applied to predict the metabolic site(s) of CYP3A4 on steroids, a class of CYP3A4-preferred substrates. 38 steroids and 12 non-steroids were randomly divided into training and test sets. Two major metabolic reactions, aliphatic hydroxylation and N-dealkylation, were involved in this study. At least one of the top three predicted metabolic sites was validated by the experimental data. The overall accuracies for the training and test sets were 82.14% and 86.36%, respectively. In summary, a mechanism-based prediction model was established for the first time, which could be used to predict the metabolic site(s) of CYP3A4 on steroids with high predictive accuracy.
Bakal, Gokhan; Talari, Preetham; Kakani, Elijah V; Kavuluru, Ramakanth
2018-06-01
Identifying new potential treatment options for medical conditions that cause human disease burden is a central task of biomedical research. Since all candidate drugs cannot be tested with animal and clinical trials, in vitro approaches are first attempted to identify promising candidates. Likewise, identifying different causal relations between biomedical entities is also critical to understand biomedical processes. Generally, natural language processing (NLP) and machine learning are used to predict specific relations between any given pair of entities using the distant supervision approach. To build high accuracy supervised predictive models to predict previously unknown treatment and causative relations between biomedical entities based only on semantic graph pattern features extracted from biomedical knowledge graphs. We used 7000 treats and 2918 causes hand-curated relations from the UMLS Metathesaurus to train and test our models. Our graph pattern features are extracted from simple paths connecting biomedical entities in the SemMedDB graph (based on the well-known SemMedDB database made available by the U.S. National Library of Medicine). Using these graph patterns connecting biomedical entities as features of logistic regression and decision tree models, we computed mean performance measures (precision, recall, F-score) over 100 distinct 80-20% train-test splits of the datasets. For all experiments, we used a positive:negative class imbalance of 1:10 in the test set to model relatively more realistic scenarios. Our models predict treats and causes relations with high F-scores of 99% and 90% respectively. Logistic regression model coefficients also help us identify highly discriminative patterns that have an intuitive interpretation. We are also able to predict some new plausible relations based on false positives that our models scored highly based on our collaborations with two physician co-authors. 
Finally, our decision tree models are able to retrieve over 50% of treatment relations from a recently created external dataset. We employed semantic graph patterns connecting pairs of candidate biomedical entities in a knowledge graph as features to predict treatment/causative relations between them. We provide what we believe is the first evidence in direct prediction of biomedical relations based on graph features. Our work complements lexical pattern based approaches in that the graph patterns can be used as additional features for weakly supervised relation prediction. Copyright © 2018 Elsevier Inc. All rights reserved.
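The graph-pattern features described here come from simple paths between entity pairs. A minimal sketch of that extraction step over a hypothetical three-node SemMedDB-style graph (the entity names and predicates are invented for illustration; the real SemMedDB graph is far larger):

```python
def simple_paths(graph, src, dst, max_len):
    """Enumerate simple (cycle-free) paths of up to max_len edges; the
    sequence of edge labels along each path is one graph-pattern feature."""
    out = []

    def walk(node, path_nodes, labels):
        if node == dst and labels:
            out.append(tuple(labels))
            return
        if len(labels) == max_len:
            return
        for nbr, lbl in graph.get(node, []):
            if nbr not in path_nodes:  # keep paths simple (no revisits)
                walk(nbr, path_nodes | {nbr}, labels + [lbl])

    walk(src, {src}, [])
    return out

# Hypothetical miniature knowledge graph with SemMedDB-style predicates.
g = {
    "aspirin": [("inflammation", "INHIBITS"), ("cox2", "INHIBITS")],
    "cox2": [("inflammation", "CAUSES")],
}
print(simple_paths(g, "aspirin", "inflammation", 2))
# → [('INHIBITS',), ('INHIBITS', 'CAUSES')]
```

Each distinct label sequence becomes a binary feature for the candidate entity pair, which is the form consumed by the logistic regression and decision tree models.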
DOE Office of Scientific and Technical Information (OSTI.GOV)
Magome, T; Haga, A; Igaki, H
Purpose: Although many outcome prediction models based on dose-volume information have been proposed, it is well known that the prognosis may also be affected by multiple clinical factors. The purpose of this study is to predict the survival time after radiotherapy for high-grade glioma patients based on features including clinical and dose-volume histogram (DVH) information. Methods: A total of 35 patients with high-grade glioma (oligodendroglioma: 2, anaplastic astrocytoma: 3, glioblastoma: 30) were selected in this study. All patients were treated with a prescribed dose of 30–80 Gy after surgical resection or biopsy from 2006 to 2013 at The University of Tokyo Hospital. All cases were randomly separated into a training dataset (30 cases) and a test dataset (5 cases). The survival time after radiotherapy was predicted based on a multiple linear regression analysis and an artificial neural network (ANN) using 204 candidate features. The candidate features included 12 clinical features (tumor location, extent of surgical resection, treatment duration of radiotherapy, etc.) and 192 DVH features (maximum dose, minimum dose, D95, V60, etc.). The effective features for the prediction were selected according to a step-wise method using the 30 training cases. The prediction accuracy was evaluated by the coefficient of determination (R²) between the predicted and actual survival time for the training and test datasets. Results: In the multiple regression analysis, the value of R² between the predicted and actual survival time was 0.460 for the training dataset and 0.375 for the test dataset. In the ANN analysis, the value of R² was 0.806 for the training dataset and 0.811 for the test dataset. Conclusion: Although a large number of patients would be needed for more accurate and robust prediction, our preliminary results showed the potential to predict the outcome in patients with high-grade glioma.
This work was partly supported by the JSPS Core-to-Core Program (No. 23003) and a Grant-in-aid from the JSPS Fellows.
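The R² statistic used to score both models is straightforward to compute. The survival times below are hypothetical numbers, included only to exercise the formula:

```python
def r_squared(actual, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(actual) / len(actual)
    ss_tot = sum((a - mean) ** 2 for a in actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    return 1.0 - ss_res / ss_tot

# Hypothetical survival times (months) versus model predictions.
actual = [10.0, 14.0, 20.0, 8.0]
predicted = [11.0, 13.0, 19.0, 9.0]
print(round(r_squared(actual, predicted), 3))  # → 0.952
```

An R² near 1 means the residual error is small relative to the natural spread of the outcomes, which is why 0.81 on the test set is a much stronger result than 0.375.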
Delta Clipper-Experimental In-Ground Effect on Base-Heating Environment
NASA Technical Reports Server (NTRS)
Wang, Ten-See
1998-01-01
A quasitransient in-ground effect method is developed to study the effect of vertical landing on a launch vehicle base-heating environment. This computational methodology is based on a three-dimensional, pressure-based, viscous flow, chemically reacting, computational fluid dynamics formulation. Important in-ground base-flow physics such as the fountain-jet formation, plume growth, air entrainment, and plume afterburning are captured with the present methodology. Convective and radiative base-heat fluxes are computed for comparison with those of a flight test. The influence of the laminar Prandtl number on the convective heat flux is included in this study. A radiative direction-dependency test is conducted using both the discrete ordinate and finite volume methods. Treatment of the plume afterburning is found to be very important for accurate prediction of the base-heat fluxes. Convective and radiative base-heat fluxes predicted by the model using a finite rate chemistry option compared reasonably well with flight-test data.
Phanphet, Suwattanarwong; Dechjarern, Surangsee; Jomjanyong, Sermkiat
2017-05-01
The main objective of this work is to improve the standard of the existing design of the knee prosthesis developed by Thailand's Prostheses Foundation of Her Royal Highness The Princess Mother. Experimental structural tests of the existing design, based on ISO 10328, showed that a few components failed by fatigue under normal cyclic loading below the required number of cycles. Finite element (FE) simulations of structural tests on the knee prosthesis were carried out. Fatigue life of the knee component materials was modeled using Morrow's approach. The fatigue life prediction based on the FE model was validated against the corresponding structural test, and the results agreed well. New designs of the failed components were studied using a design-of-experiments approach and finite element analysis of the ISO 10328 structural test of knee prostheses under two separate loading cases. Under ultimate loading, the peak von Mises stress in the knee prosthesis must be less than the yield strength of the knee component's material, and the total knee deflection must be lower than 2.5 mm. The predicted fatigue life of all knee components must exceed 3,000,000 cycles under normal cyclic loading. The design parameters are the thickness of the joint bars, the diameter of the lower connector, and the thickness of the absorber-stopper. An optimized knee prosthesis design meeting all the requirements was recommended. An experimental ISO 10328 structural test of the fabricated knee prosthesis based on the optimized design confirmed the finite element prediction. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
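The three acceptance criteria stated in the abstract (peak von Mises stress below yield, total deflection under 2.5 mm, fatigue life above 3,000,000 cycles) amount to a simple pass/fail design check. The numeric inputs in this sketch are invented, not values from the study:

```python
def knee_design_ok(peak_stress_mpa, yield_mpa, deflection_mm, fatigue_cycles):
    """Pass/fail check against the three acceptance criteria described
    for the ISO 10328 structural test (thresholds from the abstract)."""
    return (peak_stress_mpa < yield_mpa        # ultimate-load stress check
            and deflection_mm < 2.5            # stiffness check
            and fatigue_cycles > 3_000_000)    # fatigue-life check

# Hypothetical candidate designs: (stress MPa, yield MPa, deflection mm, cycles)
print(knee_design_ok(380.0, 505.0, 1.8, 4_000_000))  # → True
print(knee_design_ok(520.0, 505.0, 1.8, 4_000_000))  # → False (stress > yield)
```

In the study, each candidate from the design-of-experiments sweep would be evaluated against all three criteria before the optimum is recommended.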
The prospect of predictive testing for personal risk: attitudes and decision making.
Wroe, A L; Salkovskis, P M; Rimes, K A
1998-06-01
As predictive tests for medical problems such as genetic disorders become more widely available, it becomes increasingly important to understand the processes involved in the decision whether or not to seek testing. This study investigates the decision to pursue the possibility of testing. Individuals (one group who had already contemplated the possibility of predictive testing and one group who had not) were asked to consider predictive testing for several diseases. They rated the likelihood of opting for testing and specified the reasons which they believed had affected their decision. The ratio of the numbers of reasons stated for testing and the numbers of reasons stated against testing was a good predictor of the stated likelihood of testing, particularly when the reasons were weighted by utility (importance). Those who had previously contemplated testing specified more emotional reasons. It is proposed that the decision process is internally logical although it may seem illogical to others due to there being idiosyncratic premises (or reasons) upon which the decision is based. It is concluded that the Utility Theory is a useful basis for describing how people make decisions related to predictive testing; modifications of the theory are proposed.
Predictive testing to characterize substances for their skin sensitization potential has historically been based on animal models such as the Local Lymph Node Assay (LLNA) and the Guinea Pig Maximization Test (GPMT). In recent years, EU regulations have provided a strong incentiv...
NIR spectroscopic measurement of moisture content in Scots pine seeds.
Lestander, Torbjörn A; Geladi, Paul
2003-04-01
When tree seeds are used for seedling production it is important that they are of high quality in order to be viable. One of the factors influencing viability is moisture content, and an ideal quality control system should be able to measure this factor quickly for each seed. Seed moisture content within the range 3-34% was determined by near-infrared (NIR) spectroscopy on Scots pine (Pinus sylvestris L.) single seeds and on bulk seed samples consisting of 40-50 seeds. The models for predicting water content from the spectra were made by partial least squares (PLS) and ordinary least squares (OLS) regression. Different conditions were simulated, involving both using fewer wavelengths and going from bulk samples to single seeds. Reflectance and transmission measurements were used, and different spectral pretreatment methods were tested on the spectra. Including bias, the lowest prediction errors for PLS models based on reflectance within 780-2280 nm were 0.8% for bulk samples and 1.9% for single seeds. Reduction of the single-seed reflectance spectrum to 850-1048 nm gave higher biases and prediction errors in the test set. In transmission (850-1048 nm) the prediction error was 2.7% for single seeds. OLS models based on a simulated 4-sensor single-seed system consisting of optical filters with Gaussian transmission indicated more than 3.4% error in prediction. A practical F-test based on test sets to differentiate models is introduced.
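The OLS calibration step, in its simplest single-band form, fits moisture against one absorbance value by least squares. The sketch below uses fabricated, exactly linear data purely to illustrate the fit; real NIR calibrations use many wavelengths and PLS:

```python
def ols_fit(x, y):
    """Ordinary least squares for one predictor: returns (slope, intercept)
    minimizing the sum of squared residuals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical calibration: absorbance at one NIR band vs moisture (%).
absorbance = [0.10, 0.20, 0.30, 0.40]
moisture = [5.0, 11.0, 17.0, 23.0]  # exactly linear here, for illustration
slope, intercept = ols_fit(absorbance, moisture)
print(round(slope, 6), round(intercept, 6))  # → 60.0 -1.0
```

Prediction error is then the spread of residuals on a held-out test set, which is what the reported 0.8-3.4% figures summarize.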
Predicting Performance in Higher Education Using Proximal Predictors.
Niessen, A Susan M; Meijer, Rob R; Tendeiro, Jorge N
2016-01-01
We studied the validity of two methods for predicting academic performance and student-program fit that were proximal to important study criteria. Applicants to an undergraduate psychology program participated in a selection procedure containing a trial-studying test based on a work sample approach, and specific skills tests in English and math. Test scores were used to predict academic achievement and progress after the first year, achievement in specific course types, enrollment, and dropout after the first year. All tests showed positive significant correlations with the criteria. The trial-studying test was consistently the best predictor in the admission procedure. We found no significant differences between the predictive validity of the trial-studying test and prior educational performance, and substantial shared explained variance between the two predictors. Only applicants with lower trial-studying scores were significantly less likely to enroll in the program. In conclusion, the trial-studying test yielded predictive validities similar to that of prior educational performance and possibly enabled self-selection. In admissions aimed at student-program fit, or in admissions in which past educational performance is difficult to use, a trial-studying test is a good instrument to predict academic performance.
Tian, Wenliang; Meng, Fandi; Liu, Li; Li, Ying; Wang, Fuhui
2017-01-01
A concept for predicting the service property and service lifetime of organic coatings, based on alternating hydrostatic pressure (AHP) accelerated tests, is presented. An AHP accelerated test with different pressure values was employed to evaluate coating degradation, and a back-propagation artificial neural network (BP-ANN) was established to predict the service property and the service lifetime of coatings. The pressure value (P), immersion time (t) and service property (impedance modulus |Z|) are utilized as the parameters of the network. The average accuracies of the service property and immersion time predicted by the established network are 98.6% and 84.8%, respectively. The combination of the accelerated test and the BP-ANN prediction method is promising for evaluating and predicting the properties of coatings used in the deep sea. PMID:28094340
Establishment of a cell-based wound healing assay for bio-relevant testing of wound therapeutics.
Planz, Viktoria; Wang, Jing; Windbergs, Maike
Predictive in vitro testing of novel wound therapeutics requires adequate cell-based bio-assays. Such assays represent an integral part of preclinical development as a pre-step before entering in vivo studies. Simple "scratch tests" based on defected skin cell monolayers exist; however, these can only be used for testing liquids, as cell monolayer destruction and excessive hydration limit their applicability for (semi-)solid systems like wound dressings. In this context, a cell-based wound healing assay is introduced for rapid and predictive testing of wound therapeutics independent of their physical state in a bio-relevant environment. The novel assay was established for bio-relevant and predictive testing of (semi-)solid wound therapeutics. It allows for physiologically relevant hydration of the tested wound therapeutics at the air-liquid interface and their removal without cell monolayer disruption. In a proof-of-concept study, the applicability and discriminative power could be demonstrated by examining unloaded and drug-loaded wound dressings with two different established wound healing actives (dexpanthenol and metyrapone) and their effect on skin cell behavior. The influence of the released drug on the cells' healing behavior could successfully be monitored over time. Wound size assessment after 96 h resulted in an eightfold smaller wound area for drug-treated models compared with those treated with unloaded fibers and non-treated wounds. This assay provides valuable first insights towards the establishment of a valid screening and evaluation tool for preclinical wound therapeutic development from liquid to (semi-)solid systems to improve predictability in a simple, yet standardized way. Copyright © 2017 Elsevier Inc. All rights reserved.
Effects of surface chemistry on hot corrosion life
NASA Technical Reports Server (NTRS)
Fryxell, R. E.; Leese, G. E.
1985-01-01
The primary objective of this program is the development of a hot corrosion life prediction methodology based on a combination of laboratory test data and evaluation of field service turbine components which show evidence of hot corrosion. The laboratory program comprises burner rig testing by TRW. A summary of results is given for two series of burner rig tests. The life prediction methodology parameters to be appraised in a final campaign of burner rig tests are outlined.
Punnen, Sanoj; Freedland, Stephen J; Polascik, Thomas J; Loeb, Stacy; Risk, Michael C; Savage, Stephen; Mathur, Sharad C; Uchio, Edward; Dong, Yan; Silberstein, Jonathan L
2018-06-01
The 4Kscore® test accurately detects aggressive prostate cancer and reduces unnecessary biopsies. However, its performance in African American men has been unknown. We assessed test performance in a cohort of men with a large African American representation. Men referred for prostate biopsy at 8 Veterans Affairs medical centers were prospectively enrolled in the study. All men underwent phlebotomy for 4Kscore test assessment prior to prostate biopsy. The primary outcome was the detection of Grade Group 2 or higher cancer on biopsy. We assessed the discrimination, calibration and clinical usefulness of the 4Kscore to predict Grade Group 2 or higher prostate cancer and compared it to a base model consisting of age, digital rectal examination and prostate specific antigen. Additionally, we compared test performance in African American and non-African American men. Of the 366 enrolled men 205 (56%) were African American and 131 (36%) had Grade Group 2 or higher prostate cancer. The 4Kscore test showed better discrimination (AUC 0.81 vs 0.74, p <0.01) and higher clinical usefulness on decision curve analysis than the base model. Test prediction closely approximated the observed risk of Grade Group 2 or higher prostate cancer. There was no difference in test performance between African American and non-African American men (0.80 vs 0.84, p = 0.32). The test outperformed the base model in each group. The 4Kscore test accurately predicts aggressive prostate cancer for biopsy decision making in African American and non-African American men. Copyright © 2018 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
Improve SSME power balance model
NASA Technical Reports Server (NTRS)
Karr, Gerald R.
1992-01-01
Effort was dedicated to development and testing of a formal strategy for reconciling uncertain test data with physically limited computational prediction. Specific weaknesses in the logical structure of the current Power Balance Model (PBM) version are described with emphasis given to the main routing subroutines BAL and DATRED. Selected results from a variational analysis of PBM predictions are compared to Technology Test Bed (TTB) variational study results to assess PBM predictive capability. The motivation for systematic integration of uncertain test data with computational predictions based on limited physical models is provided. The theoretical foundation for the reconciliation strategy developed in this effort is presented, and results of a reconciliation analysis of the Space Shuttle Main Engine (SSME) high pressure fuel side turbopump subsystem are examined.
Elemental Water Impact Test: Phase 3 Plunge Depth of a 36-Inch Aluminum Tank Head
NASA Technical Reports Server (NTRS)
Vassilakos, Gregory J.
2014-01-01
Spacecraft are being designed based on LS-DYNA water landing simulations. The Elemental Water Impact Test (EWIT) series was undertaken to assess the accuracy of LS-DYNA water impact simulations. Phase 3 featured a composite tank head that was tested at a range of heights to verify the ability to predict structural failure of composites. To support planning for Phase 3, a test series was conducted with an aluminum tank head dropped from heights of 2, 6, 10, and 12 feet to verify that the test article would not impact the bottom of the test pool. This report focuses on the comparisons of the measured plunge depths to LS-DYNA predictions. The results for the tank head model demonstrated the following: (1) LS-DYNA provides accurate predictions for peak accelerations; (2) LS-DYNA consistently under-predicts plunge depth, so an allowance of at least 20% should be added to the LS-DYNA predictions; and (3) the LS-DYNA predictions for plunge depth are relatively insensitive to the fluid-structure coupling stiffness.
Descent advisor preliminary field test
NASA Technical Reports Server (NTRS)
Green, Steven M.; Vivona, Robert A.; Sanford, Beverly
1995-01-01
A field test of the Descent Advisor (DA) automation tool was conducted at the Denver Air Route Traffic Control Center in September 1994. DA is being developed to assist Center controllers in the efficient management and control of arrival traffic. DA generates advisories, based on trajectory predictions, to achieve accurate meter-fix arrival times in a fuel efficient manner while assisting the controller with the prediction and resolution of potential conflicts. The test objectives were to evaluate the accuracy of DA trajectory predictions for conventional- and flight-management-system-equipped jet transports, to identify significant sources of trajectory prediction error, and to investigate procedural and training issues (both air and ground) associated with DA operations. Various commercial aircraft (97 flights total) and a Boeing 737-100 research aircraft participated in the test. Preliminary results from the primary test set of 24 commercial flights indicate a mean DA arrival time prediction error of 2.4 sec late with a standard deviation of 13.1 sec. This paper describes the field test and presents preliminary results for the commercial flights.
Effective prediction of biodiversity in tidal flat habitats using an artificial neural network.
Yoo, Jae-Won; Lee, Yong-Woo; Lee, Chang-Gun; Kim, Chang-Soo
2013-02-01
Accurate predictions of benthic macrofaunal biodiversity greatly benefit the efficient planning and management of habitat restoration efforts in tidal flat habitats. Artificial neural network (ANN) prediction models for such biodiversity were developed and tested based on 13 biophysical variables, collected from 50 sites of tidal flats along the coast of Korea during 1991-2006. The developed model showed high predictive accuracy during training, cross-validation and testing. Beyond the training and testing procedures, an independent dataset from a different time period (2007-2010) was used to test the robustness and practical usability of the model. High predictive accuracy on the independent dataset (r = 0.84) validated the network's proper learning of the predictive relationship and its generality. Key influential variables identified by follow-up sensitivity analyses were related to topographic dimension, environmental heterogeneity, and water column properties. This study demonstrates the successful application of ANNs for the accurate prediction of benthic macrofaunal biodiversity and for understanding the dynamics of candidate variables. Copyright © 2012 Elsevier Ltd. All rights reserved.
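The independent-dataset validation above reduces to a Pearson correlation between predicted and observed diversity (the reported r = 0.84). A minimal sketch of that check; the values below are hypothetical stand-ins for ANN outputs and observed diversity indices:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between predicted and observed values."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

predicted = [2.1, 3.0, 4.2, 5.1]   # hypothetical ANN outputs
observed  = [2.0, 3.2, 4.0, 5.3]   # hypothetical diversity indices
r = pearson_r(predicted, observed)
```

A high r on data the network never saw (here, a later time period) is what supports the generality claim.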
Miura, Michiaki; Nakamura, Junichi; Matsuura, Yusuke; Wako, Yasushi; Suzuki, Takane; Hagiwara, Shigeo; Orita, Sumihisa; Inage, Kazuhide; Kawarai, Yuya; Sugano, Masahiko; Nawata, Kento; Ohtori, Seiji
2017-12-16
Finite element analysis (FEA) of the proximal femur has been previously validated with large mesh sizes, but these were insufficient to simulate models with small implants in recent studies. This study aimed to validate a proximal femoral computed tomography (CT)-based specimen-specific FEA model with smaller mesh size using fresh frozen cadavers. Twenty proximal femora from 10 cadavers (mean age, 87.1 years) were examined. CT was performed on all specimens with a calibration phantom. Nonlinear FEA prediction with stance configuration was performed using Mechanical Finder (mesh, 1.5 mm tetrahedral elements; shell thickness, 0.2 mm; Poisson's coefficient, 0.3), in comparison with mechanical testing. Force was applied at a fixed vertical displacement rate, and the magnitude of the applied load and displacement were continuously recorded. The fracture load and stiffness were calculated from the force-displacement curve, and the correlation between mechanical testing and FEA prediction was examined. A pilot study with one femur revealed that the equations proposed by Keller for vertebra were the most reproducible for calculating Young's modulus and the yield stress of elements of the proximal femur. There was a good linear correlation between the fracture loads of mechanical testing and FEA prediction (R² = 0.6187) and between the stiffness of mechanical testing and FEA prediction (R² = 0.5499). There was a good linear correlation between fracture load and stiffness (R² = 0.6345) in mechanical testing and an excellent correlation between these (R² = 0.9240) in FEA prediction. The CT-based specimen-specific FEA model of the proximal femur with small element size was validated using fresh frozen cadavers. The equations proposed by Keller for vertebra were found to be the most reproducible for the proximal femur in elderly people.
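The R² agreement statistics reported above come from ordinary least-squares fits between measured and FEA-predicted quantities. A small sketch; the fracture-load values below are hypothetical, not the study's data:

```python
def linear_fit(x, y):
    """Ordinary least-squares fit y ≈ a*x + b.
    Returns (slope, intercept, R²), with R² = 1 - SS_res/SS_tot."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1 - ss_res / ss_tot

# Hypothetical fracture loads (N): mechanical test vs FEA prediction.
measured  = [4200, 5100, 3900, 6100, 4800]
predicted = [4000, 5300, 3700, 5900, 5000]
slope, intercept, r2 = linear_fit(measured, predicted)
```

An R² near 1 indicates that the FEA model tracks the mechanical testing result across specimens; the study's values (0.55-0.92) were computed the same way over 20 femora.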
Early detection of Alzheimer disease: methods, markers, and misgivings.
Green, R C; Clarke, V C; Thompson, N J; Woodard, J L; Letz, R
1997-01-01
There is at present no reliable predictive test for most forms of Alzheimer disease (AD). Although some information about future risk for disease is available in theory through ApoE genotyping, it is of limited accuracy and utility. Once neuroprotective treatments are available for AD, reliable early detection will become a key component of the treatment strategy. We recently conducted a pilot survey eliciting attitudes and beliefs toward an unspecified and hypothetical predictive test for AD. The survey was completed by a convenience sample of 176 individuals, aged 22-77, which was 75% female, 30% African-American, and of which 33% had a family member with AD. The survey revealed that 69% of this sample would elect to obtain predictive testing for AD if the test were 100% accurate. Individuals were more likely to desire predictive testing if they had an a priori belief that they would develop AD (p = 0.0001), had a lower educational level (p = 0.003), were worried that they would develop AD (p = 0.02), had a self-defined history of depression (p = 0.04), and had a family member with AD (p = 0.04). However, the desire for predictive testing was not significantly associated with age, gender, ethnicity, or income. The desire to obtain predictive testing for AD decreased as the assumed accuracy of the hypothetical test decreased. A better short-term strategy for early detection of AD may be computer-based neuropsychological screening of at-risk (older aged) individuals to identify very early cognitive impairment. Individuals identified in this manner could be referred for diagnostic evaluation and early cases of AD could be identified and treated. A new self-administered, touch-screen, computer-based, neuropsychological screening instrument called Neurobehavioral Evaluation System-3 is described, which may facilitate this type of screening.
Physics-based model for predicting the performance of a miniature wind turbine
NASA Astrophysics Data System (ADS)
Xu, F. J.; Hu, J. Z.; Qiu, Y. P.; Yuan, F. G.
2011-04-01
A comprehensive physics-based model for predicting the performance of a miniature wind turbine (MWT) for powering wireless sensor systems was proposed in this paper. An approximation of the power coefficient of the turbine rotor was made after the turbine rotor performance was measured. By incorporating this approximation into an equivalent circuit model derived from the operating principles of the MWT, the overall system performance of the MWT was predicted. To demonstrate the prediction, an MWT system comprising a 7.6 cm Thorgren plastic propeller as the turbine rotor and a DC motor as the generator was designed and its performance was tested experimentally. The predicted output voltage, power, and system efficiency matched the test results well, implying that this study holds promise for estimating and optimizing the performance of the MWT.
Wald Sequential Probability Ratio Test for Analysis of Orbital Conjunction Data
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Markley, F. Landis; Gold, Dara
2013-01-01
We propose a Wald Sequential Probability Ratio Test for analysis of commonly available predictions associated with spacecraft conjunctions. Such predictions generally consist of a relative state and relative state error covariance at the time of closest approach, under the assumption that prediction errors are Gaussian. We show that under these circumstances, the likelihood ratio of the Wald test reduces to an especially simple form, involving the current best estimate of collision probability, and a similar estimate of collision probability that is based on prior assumptions about the likelihood of collision.
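The sequential test described above can be sketched in a few lines. This is a generic Wald SPRT with the standard decision thresholds A = (1-β)/α and B = β/(1-α); the per-update likelihood ratios below are hypothetical stand-ins for the collision-probability ratio the authors derive, not their actual statistic:

```python
import math

def wald_sprt(likelihood_ratios, alpha=0.01, beta=0.01):
    """Wald Sequential Probability Ratio Test.

    Accumulates the log-likelihood ratio over a sequence of
    observations and stops when it crosses either threshold:
      ln((1 - beta) / alpha)  -> accept H1 (e.g. 'collision risk')
      ln(beta / (1 - alpha))  -> accept H0
    Returns 'H1', 'H0', or 'continue' if neither bound is crossed.
    """
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    s = 0.0
    for lr in likelihood_ratios:
        s += math.log(lr)
        if s >= upper:
            return "H1"
        if s <= lower:
            return "H0"
    return "continue"

# Hypothetical per-update likelihood ratios (current collision
# probability estimate over a prior-based estimate).
print(wald_sprt([5.0, 4.0, 3.0, 3.0]))  # prints H1
```

The appeal of the sequential form is that a decision can be reached as soon as the accumulated evidence is strong enough, rather than after a fixed number of conjunction updates.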
Neuropsychological Testing Predicts Cerebrospinal Fluid Aβ in Mild Cognitive Impairment (MCI)
Kandel, Benjamin M.; Avants, Brian B.; Gee, James C.; Arnold, Steven E.; Wolk, David A.
2015-01-01
Background Psychometric tests predict conversion of Mild Cognitive Impairment (MCI) to probable Alzheimer's Disease (AD). Because the definition of clinical AD relies on those same psychometric tests, the ability of these tests to identify underlying AD pathology remains unclear. Objective To determine the degree to which psychometric testing predicts molecular evidence of AD amyloid pathology, as indicated by CSF Aβ1–42, in patients with MCI, as compared to neuroimaging biomarkers. Methods We identified 408 MCI subjects with CSF Aβ levels, psychometric test data, FDG-PET scans, and acceptable volumetric MR scans from the Alzheimer's Disease Neuroimaging Initiative (ADNI). We used psychometric tests and imaging biomarkers in univariate and multivariate models to predict Aβ status. Results The 30-minute delayed recall score of the Rey Auditory Verbal Learning Test (AVLT) was the best predictor of Aβ status among the psychometric tests, achieving an AUC of 0.67±0.02 and odds ratio of 2.5±0.4. FDG-PET was the best imaging-based biomarker (AUC 0.67±0.03, OR 3.2±1.2), followed by hippocampal volume (AUC 0.64±0.02, OR 2.4±0.3). A multivariate analysis based on the psychometric tests improved on the univariate predictors, achieving an AUC of 0.68±0.03 (OR 3.38±1.2). Adding imaging biomarkers to the multivariate analysis did not improve the AUC. Conclusion Psychometric tests perform as well as imaging biomarkers to predict the presence of molecular markers of AD pathology in MCI patients and should be considered in the determination of the likelihood that MCI is due to AD. PMID:25881908
Chan, Wing Cheuk; Papaconstantinou, Dean; Lee, Mildred; Telfer, Kendra; Jo, Emmanuel; Drury, Paul L; Tobias, Martin
2018-05-01
To validate the New Zealand Ministry of Health (MoH) Virtual Diabetes Register (VDR) using longitudinal laboratory results and to develop an improved algorithm for estimating diabetes prevalence at a population level. The assigned diabetes status of individuals based on the 2014 version of the MoH VDR is compared to the diabetes status based on the laboratory results stored in the Auckland regional laboratory result repository (TestSafe) using the New Zealand diabetes diagnostic criteria. The existing VDR algorithm is refined by reviewing the sensitivity and positive predictive value of each of the VDR algorithm rules, individually and in combination. The diabetes prevalence estimate based on the original 2014 MoH VDR was 17% higher (n = 108,505) than the corresponding TestSafe prevalence estimate (n = 92,707). Compared to the diabetes prevalence based on TestSafe, the original VDR has a sensitivity of 89%, specificity of 96%, positive predictive value of 76% and negative predictive value of 98%. The modified VDR algorithm has improved the positive predictive value by 6.1% and the specificity by 1.4%, with modest reductions in sensitivity of 2.2% and negative predictive value of 0.3%. At an aggregated level the overall diabetes prevalence estimated by the modified VDR is 5.7% higher than the corresponding estimate based on TestSafe. The Ministry of Health Virtual Diabetes Register algorithm has been refined to provide a more accurate diabetes prevalence estimate at a population level. The comparison highlights the potential value of a national population long-term condition register constructed from both laboratory results and administrative data. Copyright © 2018 Elsevier B.V. All rights reserved.
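The register-validation metrics quoted above (sensitivity 89%, specificity 96%, PPV 76%, NPV 98%) follow from a standard 2x2 comparison against the laboratory reference. A minimal sketch; the counts below are hypothetical, chosen only to roughly reproduce the reported percentages:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 screening metrics for an algorithm (here a
    register rule) compared against a reference standard."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts, not the study's actual cross-tabulation.
m = diagnostic_metrics(tp=82000, fp=26000, fn=10000, tn=600000)
print({k: round(v, 2) for k, v in m.items()})
# → {'sensitivity': 0.89, 'specificity': 0.96, 'ppv': 0.76, 'npv': 0.98}
```

Reviewing each rule's sensitivity and PPV individually, as the abstract describes, amounts to computing this table rule by rule before combining them.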
Brase, Jan C.; Kronenwett, Ralf; Petry, Christoph; Denkert, Carsten; Schmidt, Marcus
2013-01-01
Several multigene tests have been developed for breast cancer patients to predict the individual risk of recurrence. Most of the first generation tests rely on proliferation-associated genes and are commonly carried out in central reference laboratories. Here, we describe the development of a second generation multigene assay, the EndoPredict test, a prognostic multigene expression test for estrogen receptor (ER) positive, human epidermal growth factor receptor 2 (HER2) negative (ER+/HER2−) breast cancer patients. The EndoPredict gene signature was initially established in a large high-throughput microarray-based screening study. The key steps for biomarker identification are discussed in detail, in comparison to the establishment of other multigene signatures. After biomarker selection, genes and algorithms were transferred to a diagnostic platform (reverse transcription quantitative PCR (RT-qPCR)) to allow for assaying formalin-fixed, paraffin-embedded (FFPE) samples. A comprehensive analytical validation was performed and a prospective proficiency testing study with seven pathological laboratories finally proved that EndoPredict can be reliably used in the decentralized setting. Three independent large clinical validation studies (n = 2,257) demonstrated that EndoPredict offers independent prognostic information beyond current clinicopathological parameters and clinical guidelines. The review article summarizes several important steps that should be considered for the development process of a second generation multigene test and offers a means for transferring a microarray signature from the research laboratory to clinical practice. PMID:27605191
I-TASSER: fully automated protein structure prediction in CASP8.
Zhang, Yang
2009-01-01
The I-TASSER algorithm for 3D protein structure prediction was tested in CASP8, with the procedure fully automated in both the Server and Human sections. The quality of the server models is close to that of the human ones, but the human predictions incorporate more diverse templates from other servers, which improves the human predictions on some of the distant homology targets. For the first time, the sequence-based contact predictions from machine learning techniques were found helpful for both template-based modeling (TBM) and template-free modeling (FM). In TBM, although the accuracy of the sequence-based contact predictions is on average lower than that of template-based ones, the novel contacts in the sequence-based predictions, which are complementary to the threading templates in the weakly or unaligned regions, are important for improving the global and local packing in these regions. Moreover, the newly developed atomic structural refinement algorithm was tested in CASP8 and found to improve the hydrogen-bonding networks and the overall TM-score, mainly due to its ability to remove steric clashes so that the models can be generated from cluster centroids. Nevertheless, one of the major issues of the I-TASSER pipeline is model selection, where the best models could not be appropriately recognized when the correct templates were detected by only a minority of the threading algorithms. There are also problems related to domain splitting and mirror-image recognition, which mainly influence the performance of I-TASSER modeling in the FM-based structure predictions. Copyright 2009 Wiley-Liss, Inc.
Jonas, Susanna; Wild, Claudia; Schamberger, Chantal
2003-02-01
The aim of this health technology assessment was to analyse the current scientific evidence and genetic counselling on predictive genetic testing for hereditary breast and colorectal cancer. Predictive genetic testing will be available for several common diseases in the future, and questions related to financial issues and quality standards will be raised. This report is based on a systematic/nonsystematic literature search in several databases (e.g. EmBase, Medline, Cochrane Library) and on a specific health technology assessment report (CCOHTA) and review (American Gastroenterological Association), respectively. Laboratory test methods, early detection methods and the benefit from prophylactic interventions were analysed and social consequences interpreted. Breast and colorectal cancer are among the most frequent cancers. Most cases arise from a random accumulation of risk factors; 5-10% show a familial determination. An inherited mutated gene is responsible for the increased cancer risk. In these families, high tumour frequency, young age at diagnosis and multiple primary tumours are remarkable. GENETIC DIAGNOSIS: Sequence analysis is the gold standard. Denaturing high performance liquid chromatography is a quick alternative method. The identification of the responsible gene defect in an affected family member is important. If the test result is positive, there remains uncertainty about whether the disease will develop, when, and to what degree, which is founded in the genotype-phenotype correlation. The individual risk estimation is based upon empirical evidence. The test results affect the whole family. Currently, primary prevention is possible for familial adenomatous polyposis (celecoxib, prophylactic colectomy) and for hereditary breast carcinoma (prophylactic mastectomy). The so-called preventive medical check-ups are early detection examinations. The evidence about early detection methods for colorectal cancer is better than for breast cancer.
Prophylactic mastectomy (PM) reduces the relative breast cancer risk by approximately 90%. The question is if PM has an impact on mortality. The acceptance of PM is culture-dependent. Colectomy can be used as a prophylactic (FAP) and therapeutic method. After surgery, the cancer risk remains high and so early detection examinations are still necessary. EVIDENCE-BASED STATEMENTS: The evidence is often fragmentary and of limited quality. For objective test result presentation information about sensitivity, specificity, positive predictive value, and number needed to screen and treat, respectively, are necessary. New identification of mutations and demand will lead to an increase of predictive genetic counselling and testing. There is a gap between predictive genetic diagnosis and prediction, prevention, early detection and surgical interventions. These circumstances require a basic strategy. Since predictive genetic diagnosis is a very sensitive issue it is important to deal with it carefully in order to avoid inappropriate hopes. Thus, media, experts and politicians need to consider opportunities and limitations in their daily decision-making processes.
Testing prediction methods: Earthquake clustering versus the Poisson model
Michael, A.J.
1997-01-01
Testing earthquake prediction methods requires statistical techniques that compare observed success to random chance. One technique is to produce simulated earthquake catalogs and measure the relative success of predicting real and simulated earthquakes. The accuracy of these tests depends on the validity of the statistical model used to simulate the earthquakes. This study tests the effect of clustering in the statistical earthquake model on the results. Three simulation models were used to produce significance levels for a VLF earthquake prediction method. As the degree of simulated clustering increases, the statistical significance drops. Hence, the use of a seismicity model with insufficient clustering can lead to overly optimistic results. A successful method must pass the statistical tests with a model that fully replicates the observed clustering; however, a method can be rejected based on tests with a model that contains insufficient clustering. U.S. copyright. Published in 1997 by the American Geophysical Union.
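The simulated-catalog test described above can be sketched as a Monte Carlo significance calculation under a no-clustering (Poisson-like) null model. The function and parameters are illustrative, not the study's VLF analysis; a clustered null model would replace the independent draws inside the loop and, as the abstract notes, would raise the resulting significance level:

```python
import random

def significance(observed_hits, n_targets, hit_prob, n_sims=10000, seed=1):
    """Monte Carlo significance of a prediction method: the fraction
    of simulated random catalogs that match or beat the observed
    number of successful predictions. Each simulated catalog treats
    targets as independent (no clustering)."""
    rng = random.Random(seed)
    at_least_as_good = 0
    for _ in range(n_sims):
        hits = sum(1 for _ in range(n_targets) if rng.random() < hit_prob)
        if hits >= observed_hits:
            at_least_as_good += 1
    return at_least_as_good / n_sims

# e.g. 8 successes on 10 predictions, each with a 50% chance by luck:
p_value = significance(observed_hits=8, n_targets=10, hit_prob=0.5)
```

A small fraction suggests the method beats chance under this null model, but only under it: the study's point is that the conclusion can flip when the simulated catalogs include realistic clustering.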
Prediction of Spacecraft Vibration using Acceleration and Force Envelopes
NASA Technical Reports Server (NTRS)
Gordon, Scott; Kaufman, Daniel; Kern, Dennis; Scharton, Terry
2009-01-01
The base forces in the GLAST X- and Z-axis sine vibration tests were similar to those derived using generic inputs (from the user's guide and handbook), but the base forces in the sine test were generally greater than the flight data. Basedrive analyses using envelopes of flight acceleration data provided more accurate predictions of the base force than generic inputs, and, as expected, using envelopes of both the flight acceleration and force provided even more accurate predictions. The GLAST spacecraft interface accelerations and forces measured during the MECO transient were relatively low in the 60 to 150 Hz regime. One may expect the flight forces measured at the base of various spacecraft to be more dependent on the mass, frequencies, etc. of the spacecraft than are the corresponding interface acceleration data, which may depend more on the launch vehicle configuration.
Chuke, Stella O; Yen, Nguyen Thi Ngoc; Laserson, Kayla F; Phuoc, Nguyen Huu; Trinh, Nguyen An; Nhung, Duong Thi Cam; Mai, Vo Thi Chi; Qui, An Dang; Hai, Hoang Hoa; Loan, Le Thien Huong; Jones, Warren G; Whitworth, William C; Shah, J Jina; Painter, John A; Mazurek, Gerald H; Maloney, Susan A
2014-01-01
Objective. Use of tuberculin skin tests (TSTs) and interferon gamma release assays (IGRAs) as part of tuberculosis (TB) screening among immigrants from high TB-burden countries has not been fully evaluated. Methods. Prevalence of Mycobacterium tuberculosis infection (MTBI) based on TST, or the QuantiFERON-TB Gold test (QFT-G), was determined among immigrant applicants in Vietnam bound for the United States (US); factors associated with test results and discordance were assessed; predictive values of TST and QFT-G for identifying chest radiographs (CXRs) consistent with TB were calculated. Results. Of 1,246 immigrant visa applicants studied, 57.9% were TST positive, 28.3% were QFT-G positive, and test agreement was 59.4%. Increasing age was associated with positive TST results, positive QFT-G results, TST-positive but QFT-G-negative discordance, and abnormal CXRs consistent with TB. Positive predictive values of TST and QFT-G for an abnormal CXR were 25.9% and 25.6%, respectively. Conclusion. The estimated prevalence of MTBI among US-bound visa applicants in Vietnam based on TST was twice that based on QFT-G, and 14 times higher than a TST-based estimate of MTBI prevalence reported for the general US population in 2000. QFT-G was not better than TST at predicting abnormal CXRs consistent with TB.
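Test agreement such as the 59.4% quoted above comes from the 2x2 cross-classification of the two assays; Cohen's kappa additionally corrects for the agreement expected by chance. The counts below are reconstructed to be consistent with the reported marginals (57.9% TST+, 28.3% QFT-G+, 59.4% agreement, n = 1,246) and are not taken from the paper:

```python
def agreement_and_kappa(both_pos, tst_only, qft_only, both_neg):
    """Observed percent agreement and Cohen's kappa for two binary
    tests (e.g. TST vs QFT-G) from a 2x2 cross-classification."""
    n = both_pos + tst_only + qft_only + both_neg
    po = (both_pos + both_neg) / n          # observed agreement
    p_tst = (both_pos + tst_only) / n       # marginal positivity, test 1
    p_qft = (both_pos + qft_only) / n       # marginal positivity, test 2
    pe = p_tst * p_qft + (1 - p_tst) * (1 - p_qft)  # chance agreement
    return po, (po - pe) / (1 - pe)

# Counts reconstructed from the reported marginals -- illustrative only.
po, kappa = agreement_and_kappa(both_pos=284, tst_only=437,
                                qft_only=69, both_neg=456)
print(round(po, 3), round(kappa, 2))  # → 0.594 0.24
```

The modest kappa illustrates why raw percent agreement can overstate concordance when the two tests have very different positivity rates.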
Carreiro, André V; Amaral, Pedro M T; Pinto, Susana; Tomás, Pedro; de Carvalho, Mamede; Madeira, Sara C
2015-12-01
Amyotrophic Lateral Sclerosis (ALS) is a devastating disease and the most common neurodegenerative disorder of young adults. ALS patients present a rapidly progressive motor weakness, which usually leads to death within a few years from respiratory failure. The correct prediction of respiratory insufficiency is thus key for patient management. In this context, we propose an innovative approach for prognostic prediction based on patient snapshots and time windows. We first cluster temporally-related tests to obtain snapshots of the patient's condition at a given time (patient snapshots). Then we use the snapshots to predict the probability of an ALS patient requiring assisted ventilation after k days from the time of clinical evaluation (time window). This probability is based on the patient's current condition, evaluated using clinical features, including functional impairment assessments and a complete set of respiratory tests. The prognostic models include three temporal windows, allowing short, medium and long term prognosis regarding progression to assisted ventilation. Experimental results show an area under the receiver operating characteristics curve (AUC) in the test set of approximately 79% for time windows of 90, 180 and 365 days. Creating patient snapshots using hierarchical clustering with constraints outperforms the state of the art, and the proposed prognostic model becomes the first non-population-based approach for prognostic prediction in ALS. The results are promising and should enhance the current clinical practice, largely supported by non-standardized tests and clinicians' experience. Copyright © 2015 Elsevier Inc. All rights reserved.
Prediction of preterm birth in twin gestations using biophysical and biochemical tests
Conde-Agudelo, Agustin; Romero, Roberto
2018-01-01
The objective of this study was to determine the performance of biophysical and biochemical tests for the prediction of preterm birth in both asymptomatic and symptomatic women with twin gestations. We identified a total of 19 tests proposed to predict preterm birth, mainly in asymptomatic women. In these women, a single measurement of cervical length with transvaginal ultrasound before 25 weeks of gestation appears to be a good test to predict preterm birth. Its clinical potential is enhanced by the evidence that vaginal progesterone administration in asymptomatic women with twin gestations and a short cervix reduces neonatal morbidity and mortality associated with spontaneous preterm delivery. Other tests proposed for the early identification of asymptomatic women at increased risk of preterm birth showed minimal to moderate predictive accuracy. None of the tests evaluated in this review meet the criteria to be considered clinically useful to predict preterm birth among patients with an episode of preterm labor. However, a negative cervicovaginal fetal fibronectin test could be useful in identifying women who are not at risk for delivering within the next week, which could avoid unnecessary hospitalization and treatment. This review underscores the need to develop accurate tests for predicting preterm birth in twin gestations. Moreover, the use of interventions in these patients based on test results should be associated with the improvement of perinatal outcomes. PMID:25072736
Predicting fatty acid profiles in blood based on food intake and the FADS1 rs174546 SNP.
Hallmann, Jacqueline; Kolossa, Silvia; Gedrich, Kurt; Celis-Morales, Carlos; Forster, Hannah; O'Donovan, Clare B; Woolhead, Clara; Macready, Anna L; Fallaize, Rosalind; Marsaux, Cyril F M; Lambrinou, Christina-Paulina; Mavrogianni, Christina; Moschonis, George; Navas-Carretero, Santiago; San-Cristobal, Rodrigo; Godlewska, Magdalena; Surwiłło, Agnieszka; Mathers, John C; Gibney, Eileen R; Brennan, Lorraine; Walsh, Marianne C; Lovegrove, Julie A; Saris, Wim H M; Manios, Yannis; Martinez, Jose Alfredo; Traczyk, Iwona; Gibney, Michael J; Daniel, Hannelore
2015-12-01
A high intake of n-3 PUFA provides health benefits via changes in the n-6/n-3 ratio in blood. In addition to such dietary PUFAs, variants in the fatty acid desaturase 1 (FADS1) gene are also associated with altered PUFA profiles. We used mathematical modeling to predict levels of PUFA in whole blood, based on multiple hypothesis testing and bootstrapped LASSO-selected food items, anthropometric and lifestyle factors, and the rs174546 genotypes in FADS1 from 1607 participants (Food4Me Study). The models were developed using data from the first reported time point (training set) and their predictive power was evaluated using data from the last reported time point (test set). Among other food items, fish, pizza, chicken, and cereals were identified as being associated with the PUFA profiles. Using these food items and the rs174546 genotypes as predictors, models explained 26-43% of the variability in PUFA concentrations in the training set and 22-33% in the test set. Selecting food items using multiple hypothesis testing is a valuable contribution to determining predictors, as our models' predictive power is higher than that of comparable studies. As a unique feature, we additionally confirmed our models' predictive power on a separate test set. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Magnuson, Brian
A proof-of-concept software-in-the-loop study is performed to assess the accuracy of predicted net and charge-gaining energy consumption for potential effective use in optimizing powertrain management of hybrid vehicles. With promising results of improving the fuel efficiency of a thermostatic control strategy for a series, plug-in, hybrid-electric vehicle by 8.24%, the route and speed prediction machine learning algorithms are redesigned and implemented for real-world testing in a stand-alone C++ code-base to ingest map data, learn and predict driver habits, and store driver data for fast startup and shutdown of the controller or computer used to execute the compiled algorithm. Speed prediction is performed using a multi-layer, multi-input, multi-output neural network using feed-forward prediction and gradient descent through back-propagation training. Route prediction utilizes a Hidden Markov Model with a recurrent forward algorithm for prediction and multi-dimensional hash maps to store state and state distribution constraining associations between atomic road segments and end destinations. Predicted energy is calculated using the predicted time-series speed and elevation profile over the predicted route and the road-load equation. Testing of the code-base is performed over a known road network spanning 24x35 blocks on the south hill of Spokane, Washington. A large set of training routes is traversed once to add randomness to the route prediction algorithm, and a subset of the training routes (the test routes) is traversed to assess the accuracy of the net and charge-gaining predicted energy consumption. Each test route is traveled a random number of times with varying speed conditions from traffic and pedestrians to add randomness to speed prediction. Prediction data is stored and analyzed in a post-process MATLAB script. The aggregated results and analysis of all traversals of all test routes reflect the performance of the Driver Prediction algorithm.
The error of the average energy gained through charge-gaining events is 31.3% and the error of the average net energy consumed is 27.3%. The average delta and the average standard deviation of the delta of the predicted energy gained through charge-gaining events are 0.639 and 0.601 Wh, respectively, for individual time-series calculations. Similarly, the average delta and the average standard deviation of the delta of the predicted net energy consumed are 0.567 and 0.580 Wh, respectively, for individual time-series calculations. The average delta and the standard deviation of the delta of the predicted speed are 1.60 and 1.15, respectively, also for the individual time-series measurements. The accuracy of route prediction is 91%. Overall, the test routes are traversed 151 times for a total test distance of 276.4 km.
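The energy calculation described above (road-load equation applied to a predicted time-series speed and elevation profile, with negative-power intervals counted as charge-gaining energy) can be sketched as follows. This is a minimal illustration, not the study's code: the function names and the vehicle parameters (mass, drag area, rolling resistance) are illustrative placeholders.

```python
import math

def road_load_power(v, a, grade_rad, m=1500.0, cda=0.6, crr=0.01,
                    rho=1.225, g=9.81):
    """Instantaneous tractive power (W): aerodynamic drag + rolling
    resistance + grade load + inertial force, all multiplied by speed."""
    f_aero = 0.5 * rho * cda * v * v
    f_roll = crr * m * g * math.cos(grade_rad)
    f_grade = m * g * math.sin(grade_rad)
    f_inertia = m * a
    return (f_aero + f_roll + f_grade + f_inertia) * v

def predicted_energy_wh(speeds, elevations, dt=1.0, m=1500.0):
    """Integrate road-load power over a predicted speed (m/s) and elevation
    (m) time series; intervals with negative power are accumulated
    separately as charge-gaining (regeneration) energy."""
    net_j = 0.0
    regen_j = 0.0
    for i in range(1, len(speeds)):
        v = speeds[i]
        a = (speeds[i] - speeds[i - 1]) / dt
        dist = max(v * dt, 1e-9)                      # avoid atan2(0, 0)
        grade = math.atan2(elevations[i] - elevations[i - 1], dist)
        p = road_load_power(v, a, grade, m=m)
        net_j += p * dt
        if p < 0.0:
            regen_j -= p * dt
    return net_j / 3600.0, regen_j / 3600.0           # J -> Wh
```

A steady cruise on flat ground yields only net consumption, while a deceleration interval contributes to the charge-gaining total.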
NASA Astrophysics Data System (ADS)
Olson, John R.
This is a quasi-experimental study of 261 first-year high school students that analyzes gains made through the use of calculator-based rangers (CBRs) attached to calculators. The study has qualitative components but is based on quantitative tests. Beichner's TUG-K test was used for the pretest, posttest, and post-posttest. The population was divided into one group that predicted the results before using the CBRs and another that did not predict first but completed the same activities. The data for the groups were further disaggregated by learning style group (based on Kolb's Learning Styles Inventory), type of class (advanced vs. general physics), and gender. Four instructors used the labs developed by the author for this study, and significant differences between the groups by instructor were found based on interviews, participant observation, and one-way ANOVA. No significant differences were found between learning styles based on MANOVA. No significant differences were found between the predict and non-predict groups in the one-way ANOVAs or MANOVA; however, some differences do exist as measured by a survey and participant observation. Significant differences do exist by gender and type of class (advanced/general) based on one-way ANOVA and MANOVA. The males outscored the females on all tests, and the advanced physics students scored higher than the general physics students on all tests. The advanced physics students scoring higher was expected, but the difference between genders was not.
NASA Astrophysics Data System (ADS)
Jiang, Jiaqi; Gu, Rongbao
2016-04-01
This paper generalizes the method of traditional singular value decomposition entropy by incorporating the order q of the Rényi entropy. We analyze the predictive power of the entropy based on the trajectory matrix using Shanghai Composite Index (SCI) and Dow Jones Index (DJI) data in both a static test and a dynamic test. In the static test on the SCI, the results of global Granger causality tests all turn out to be significant regardless of the order selected. But this entropy fails to show much predictability in the American stock market. In the dynamic test, we find that the predictive power can be significantly improved for the SCI by our generalized method but not for the DJI. This suggests that noises and errors affect the SCI more frequently than the DJI. In the end, results obtained using different lengths of sliding window also corroborate this finding.
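The generalized entropy described above can be sketched as follows: build the trajectory (Hankel) matrix from the index series, normalize its singular values into a distribution, and apply the order-q Rényi entropy, which recovers the traditional Shannon-based SVD entropy as q approaches 1. This is a minimal sketch of the technique, not the authors' code; the window length is an arbitrary choice here.

```python
import numpy as np

def trajectory_matrix(series, window):
    """Hankel (trajectory) matrix of lagged windows of the time series."""
    n = len(series) - window + 1
    return np.array([series[i:i + window] for i in range(n)])

def renyi_svd_entropy(series, window=10, q=2.0):
    """Order-q Renyi SVD entropy: normalize the singular values of the
    trajectory matrix into a distribution p, then compute
    H_q = log(sum(p_i^q)) / (1 - q); q -> 1 gives the Shannon limit."""
    s = np.linalg.svd(trajectory_matrix(series, window), compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]
    if abs(q - 1.0) < 1e-12:                 # traditional (Shannon) case
        return float(-(p * np.log(p)).sum())
    return float(np.log((p ** q).sum()) / (1.0 - q))
```

A constant series has a rank-one trajectory matrix and therefore (near-)zero entropy, while a fluctuating series spreads its singular values and yields positive entropy.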
A Java-based web service is being developed within the US EPA’s Chemistry Dashboard to provide real time estimates of toxicity values and physical properties. WebTEST can generate toxicity predictions directly from a simple URL which includes the endpoint, QSAR method, and ...
Afzal, Muhammad Sohail
2016-09-18
In Pakistan, which ranks second worldwide in terms of hepatitis C virus (HCV) infection, an established diagnostic test for predicting antiviral therapy response is greatly needed. Interleukin 28B (IL-28B) genetic testing is widely used throughout the world to predict the response of HCV patients to interferon-based therapy and is quite helpful not only for health care workers but also for patients. There is a strong relationship between single nucleotide polymorphisms at or near the IL-28B gene and the sustained virological response to pegylated interferon plus ribavirin treatment for chronic hepatitis C. Pakistan is a resource-limited country with very low per capita income and no proper social security (health insurance) system. The health budget allocated by the government is very low and is consumed by other health emergencies such as polio virus and dengue virus infection. It is therefore proposed that a well-established IL-28B-based diagnostic test be introduced to predict antiviral therapy response and strengthen the health care system of Pakistan. Once established, this test will help in the better management of HCV-infected patients.
Xie, Dan; Li, Ao; Wang, Minghui; Fan, Zhewen; Feng, Huanqing
2005-01-01
Subcellular location of a protein is one of its key functional characteristics, as proteins must be localized correctly at the subcellular level to have normal biological function. In this paper, a novel method named LOCSVMPSI is introduced, which is based on the support vector machine (SVM) and the position-specific scoring matrix generated from profiles of PSI-BLAST. With a jackknife test on the RH2427 data set, LOCSVMPSI achieved a high overall prediction accuracy of 90.2%, which is higher than the prediction results of SubLoc and ESLpred on this data set. In addition, the prediction performance of LOCSVMPSI was evaluated with a 5-fold cross-validation test on the PK7579 data set, and the prediction results were consistently better than those of the previous method based on several SVMs using the composition of both amino acids and amino acid pairs. A further test on the SWISSPROT new-unique data set showed that LOCSVMPSI also performed better than some widely used prediction methods, such as PSORTII, TargetP and LOCnet. All these results indicate that LOCSVMPSI is a powerful tool for the prediction of eukaryotic protein subcellular localization. An online web server (current version is 1.3) based on this method has been developed and is freely available to both academic and commercial users. PMID:15980436
Machine learning-based methods for prediction of linear B-cell epitopes.
Wang, Hsin-Wei; Pai, Tun-Wen
2014-01-01
B-cell epitope prediction assists immunologists in designing peptide-based vaccines, diagnostic tests, disease prevention, treatment, and antibody production. In comparison with T-cell epitope prediction, the performance of variable-length B-cell epitope prediction is still unsatisfactory. Fortunately, thanks to increasingly available verified epitope databases, bioinformaticians can apply machine learning-based algorithms to all curated data to design improved prediction tools for biomedical researchers. Here, we have reviewed related epitope prediction papers, especially those for linear B-cell epitope prediction. It should be noted that combining selected propensity scales and statistics of epitope residues with machine learning-based tools has become a general way of constructing linear B-cell epitope prediction systems. It is also observed from most of the comparison results that the kernel method of the support vector machine (SVM) classifier outperformed other machine learning-based approaches. Hence, in this chapter, in addition to reviewing recently published papers, we introduce the fundamentals of B-cell epitopes and SVM techniques. In addition, an example of a linear B-cell epitope prediction system based on physicochemical features and amino acid combinations is illustrated in detail.
Bogard, Matthieu; Ravel, Catherine; Paux, Etienne; Bordes, Jacques; Balfourier, François; Chapman, Scott C.; Le Gouis, Jacques; Allard, Vincent
2014-01-01
Prediction of wheat phenology facilitates the selection of cultivars with specific adaptations to a particular environment. However, while QTL analysis for heading date can identify major genes controlling phenology, the results are limited to the environments and genotypes tested. Moreover, while ecophysiological models allow accurate predictions in new environments, they may require substantial phenotypic data to parameterize each genotype. Also, the model parameters are rarely related to all underlying genes, and all the possible allelic combinations that could be obtained by breeding cannot be tested with models. In this study, a QTL-based model is proposed to predict heading date in bread wheat (Triticum aestivum L.). Two parameters of an ecophysiological model (V_sat and P_base, representing genotype vernalization requirements and photoperiod sensitivity, respectively) were optimized for 210 genotypes grown in 10 contrasting location × sowing date combinations. Multiple linear regression models predicting V_sat and P_base with 11 and 12 associated genetic markers accounted for 71 and 68% of the variance of these parameters, respectively. QTL-based V_sat and P_base estimates were able to predict the heading date of an independent validation data set (88 genotypes in six location × sowing date combinations) with a root mean square error of prediction of 5 to 8.6 days, explaining 48 to 63% of the variation for heading date. The QTL-based model proposed in this study may be used for agronomic purposes and to assist breeders in suggesting locally adapted ideotypes for wheat phenology. PMID:25148833
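The multiple-linear-regression step above, fitting an ecophysiological parameter (such as V_sat) to associated genetic markers, can be sketched as follows. This is a minimal sketch under stated assumptions: the 0/1/2 marker coding, the function names, and the toy data are hypothetical, not taken from the paper.

```python
import numpy as np

def fit_marker_model(markers, param_values):
    """Ordinary least-squares fit of an ecophysiological parameter
    (e.g. vernalization requirement V_sat) on per-genotype marker scores."""
    X = np.column_stack([np.ones(len(markers)), markers])  # intercept + markers
    coefs, *_ = np.linalg.lstsq(X, np.asarray(param_values, float), rcond=None)
    return coefs

def predict_param(coefs, marker_row):
    """Predict the parameter for a new genotype from its marker scores."""
    return float(coefs[0] + np.dot(coefs[1:], marker_row))
```

The fitted coefficients can then be used to estimate V_sat or P_base for untested genotypes, which is what lets the QTL-based model feed the ecophysiological heading-date model without re-parameterizing each genotype from phenotypic data.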
Martin, Benjamin T; Jager, Tjalling; Nisbet, Roger M; Preuss, Thomas G; Grimm, Volker
2013-04-01
Individual-based models (IBMs) are increasingly used to link the dynamics of individuals to higher levels of biological organization. Still, many IBMs are data hungry, species specific, and time-consuming to develop and analyze. Many of these issues would be resolved by using general theories of individual dynamics as the basis for IBMs. While such theories have frequently been examined at the individual level, few cross-level tests exist that also try to predict population dynamics. Here we performed a cross-level test of dynamic energy budget (DEB) theory by parameterizing an individual-based model using individual-level data of the water flea, Daphnia magna, and comparing the emerging population dynamics to independent data from population experiments. We found that DEB theory successfully predicted population growth rates and peak densities but failed to capture the decline phase. Further assumptions on food-dependent mortality of juveniles were needed to capture the population dynamics after the initial population peak. The resulting model then predicted, without further calibration, characteristic switches between small- and large-amplitude cycles, which have been observed for Daphnia. We conclude that cross-level tests help detect gaps in current individual-level theories and ultimately will lead to theory development and the establishment of a generic basis for individual-based models and ecology.
An analysis of a digital variant of the Trail Making Test using machine learning techniques.
Dahmen, Jessamyn; Cook, Diane; Fellows, Robert; Schmitter-Edgecombe, Maureen
2017-01-01
The goal of this work is to develop a digital version of a standard cognitive assessment, the Trail Making Test (TMT), and assess its utility. This paper introduces a novel digital version of the TMT and a machine learning-based approach to assess its capabilities. Using digital Trail Making Test (dTMT) data collected from older adult participants (N = 54) as feature sets, we use machine learning techniques to analyze the utility of the dTMT and evaluate the insights provided by the digital features. Predicted TMT scores correlate well with clinical digital test scores (r = 0.98) and paper time-to-completion scores (r = 0.65). Predicted TICS scores exhibited a small correlation with clinically derived TICS scores (r = 0.12 for Part A, r = 0.10 for Part B). Predicted FAB scores exhibited a small correlation with clinically derived FAB scores (r = 0.13 for Part A, r = 0.29 for Part B). Digitally derived features were also used to predict diagnosis (AUC of 0.65). Our findings indicate that the dTMT is capable of measuring the same aspects of cognition as the paper-based TMT. Furthermore, the dTMT's additional data may be able to help monitor other cognitive processes not captured by the paper-based TMT alone.
Koeda, Yorihiko; Tanaka, Fumitaka; Segawa, Toshie; Ohta, Mutsuko; Ohsawa, Masaki; Tanno, Kozo; Makita, Shinji; Ishibashi, Yasuhiro; Itai, Kazuyoshi; Omama, Shin-Ichi; Onoda, Toshiyuki; Sakata, Kiyomi; Ogasawara, Kuniaki; Okayama, Akira; Nakamura, Motoyuki
2016-05-12
This study compared the combination of estimated glomerular filtration rate (eGFR) and urine albumin-to-creatinine ratio (UACR) vs. eGFR and urine protein reagent strip testing to determine chronic kidney disease (CKD) prevalence, and each method's ability to predict the risk for cardiovascular events in the general Japanese population. Baseline data including eGFR, UACR, and urine dipstick tests were obtained from the general population (n = 22 975). Dipstick test results (negative, trace, positive) were allocated to three levels of UACR (<30, 30-300, >300), respectively. In accordance with Kidney Disease Improving Global Outcomes CKD prognosis heat mapping, the cohort was classified into four risk grades (green: grade 1; yellow: grade 2; orange: grade 3; red: grade 4) based on baseline eGFR and UACR levels or dipstick tests. During the mean follow-up period of 5.6 years, 708 new-onset cardiovascular events were recorded. For CKD identified by eGFR and dipstick testing (dipstick test ≥ trace and eGFR <60 mL/min/1.73 m(2)), the prevalence of CKD was found to be 9 % in the general population. In comparison to non-CKD (grade 1), cardiovascular risk was significantly higher in risk grades ≥3 (relative risk (RR) = 1.70; 95 % CI: 1.28-2.26), but the risk predictive ability was not significant for risk grade 2 (RR = 1.20; 95 % CI: 0.95-1.52). When CKD was defined by eGFR and UACR (UACR ≥30 mg/g Cr and eGFR <60 mL/min/1.73 m(2)), prevalence was found to be 29 %. Predictive abilities in risk grade 2 (RR = 1.41; 95 % CI: 1.19-1.66) and risk grade ≥3 (RR = 1.76; 95 % CI: 1.37-2.28) were both significantly greater than for non-CKD. Reclassification analysis showed a significant improvement in risk predictive ability when CKD risk grading was based on UACR rather than on dipstick testing in this population (p < 0.001).
Although prevalence of CKD was higher when detected by UACR rather than urine dipstick testing, the predictive ability for cardiovascular events from UACR-based risk grading was superior to that of dipstick-based risk grading in the general population.
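The four-grade classification used above follows the KDIGO CKD prognosis heat map, which crosses eGFR categories (G1-G5) with albuminuria categories (A1-A3). A minimal sketch of that lookup is below; the collapse of heat-map colors into grades 1-4 follows the standard KDIGO 2012 map, so treat the exact cell assignments as an assumption rather than the study's own table.

```python
def ckd_risk_grade(egfr, uacr):
    """Map eGFR (mL/min/1.73 m^2) and UACR (mg/g Cr) onto the four
    collapsed KDIGO heat-map risk grades: 1=green (low risk), 2=yellow,
    3=orange, 4=red."""
    cuts = [90, 60, 45, 30, 15]                          # G1,G2,G3a,G3b,G4 | G5
    gi = next((i for i, c in enumerate(cuts) if egfr >= c), 5)
    ai = 0 if uacr < 30 else (1 if uacr <= 300 else 2)   # A1, A2, A3
    heat = [            # rows G1..G5, columns A1..A3 (KDIGO 2012 color map)
        [1, 2, 3],      # G1
        [1, 2, 3],      # G2
        [2, 3, 4],      # G3a
        [3, 4, 4],      # G3b
        [4, 4, 4],      # G4
        [4, 4, 4],      # G5
    ]
    return heat[gi][ai]
```

Under the study's dipstick-based grading, the UACR argument would instead be the level imputed from the strip result (negative, trace, positive), which is exactly the coarser mapping the reclassification analysis found inferior.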
Fatigue Life Methodology for Bonded Composite Skin/Stringer Configurations
NASA Technical Reports Server (NTRS)
Krueger, Ronald; Paris, Isabelle L.; OBrien, T. Kevin; Minguet, Pierre J.
2001-01-01
A methodology is presented for determining the fatigue life of composite structures based on fatigue characterization data and geometric nonlinear finite element (FE) analyses. To demonstrate the approach, predicted results were compared to fatigue tests performed on specimens which represented a tapered composite flange bonded onto a composite skin. In a first step, tension tests were performed to evaluate the debonding mechanisms between the flange and the skin. In a second step, a 2D FE model was developed to analyze the tests. To predict matrix cracking onset, the relationship between the tension load and the maximum principal stresses transverse to the fiber direction was determined through FE analysis. Transverse tension fatigue life data were used to generate an onset fatigue life P-N curve for matrix cracking. The resulting prediction was in good agreement with data from the fatigue tests. In a third step, a fracture mechanics approach based on FE analysis was used to determine the relationship between the tension load and the critical energy release rate. Mixed-mode energy release rate fatigue life data were used to create a fatigue life onset G-N curve for delamination. The resulting prediction was in good agreement with data from the fatigue tests. Further, the prediction curve for cumulative life to failure was generated from the previous onset fatigue life curves. The results showed that the methodology offers significant potential to predict the cumulative fatigue life of composite structures.
NASA Technical Reports Server (NTRS)
Mullen, C. R.; Bender, R. L.; Bevill, R. L.; Reardon, J.; Hartley, L.
1972-01-01
A handbook containing a summary of model and flight test base heating data from the S-I, S-IB, S-IV, S-IC, and S-II stages is presented. A review of the available prediction methods is included. Experimental data are provided to make the handbook a single source of Saturn base heating data which can be used for preliminary base heating design predictions of launch vehicles.
Predicting PDZ domain mediated protein interactions from structure
2013-01-01
Background PDZ domains are structural protein domains that recognize simple linear amino acid motifs, often at protein C-termini, and mediate protein-protein interactions (PPIs) in important biological processes, such as ion channel regulation, cell polarity and neural development. PDZ domain-peptide interaction predictors have been developed based on domain and peptide sequence information. Since domain structure is known to influence binding specificity, we hypothesized that structural information could be used to predict new interactions compared to sequence-based predictors. Results We developed a novel computational predictor of PDZ domain and C-terminal peptide interactions using a support vector machine trained with PDZ domain structure and peptide sequence information. Performance was estimated using extensive cross validation testing. We used the structure-based predictor to scan the human proteome for ligands of 218 PDZ domains and show that the predictions correspond to known PDZ domain-peptide interactions and PPIs in curated databases. The structure-based predictor is complementary to the sequence-based predictor, finding unique known and novel PPIs, and is less dependent on training–testing domain sequence similarity. We used a functional enrichment analysis of our hits to create a predicted map of PDZ domain biology. This map highlights PDZ domain involvement in diverse biological processes, some only found by the structure-based predictor. Based on this analysis, we predict novel PDZ domain involvement in xenobiotic metabolism and suggest new interactions for other processes including wound healing and Wnt signalling. Conclusions We built a structure-based predictor of PDZ domain-peptide interactions, which can be used to scan C-terminal proteomes for PDZ interactions. 
We also show that the structure-based predictor finds many known PDZ mediated PPIs in human that were not found by our previous sequence-based predictor and is less dependent on training–testing domain sequence similarity. Using both predictors, we defined a functional map of human PDZ domain biology and predict novel PDZ domain function. Users may access our structure-based and previous sequence-based predictors at http://webservice.baderlab.org/domains/POW. PMID:23336252
Donahue, D A; Kaufman, L E; Avalos, J; Simion, F A; Cerven, D R
2011-03-01
The Chorioallantoic Membrane Vascular Assay (CAMVA) and Bovine Corneal Opacity and Permeability (BCOP) test are widely used to predict ocular irritation potential for consumer-use products. These in vitro assays do not require live animals, produce reliable predictive data for defined applicability domains compared to the Draize rabbit eye test, and are rapid and inexpensive. Data from 304 CAMVA and/or BCOP studies (319 formulations) were surveyed to determine the feasibility of predicting ocular irritation potential for various formulations. Hair shampoos, skin cleansers, and ethanol-based hair styling sprays were repeatedly predicted to be ocular irritants (accuracy rate=0.90-1.00), with skin cleanser and hair shampoo irritation largely dependent on surfactant species and concentration. Conversely, skin lotions/moisturizers and hair styling gels/lotions were repeatedly predicted to be non-irritants (accuracy rate=0.92 and 0.82, respectively). For hair shampoos, ethanol-based hair stylers, skin cleansers, and skin lotions/moisturizers, future ocular irritation testing (i.e., CAMVA/BCOP) can be nearly eliminated if new formulations are systematically compared to those previously tested using a defined decision tree. For other tested product categories, new formulations should continue to be evaluated in CAMVA/BCOP for ocular irritation potential because either the historical data exhibit significant variability (hair conditioners and mousses) or the historical sample size is too small to permit definitive conclusions (deodorants, make-up removers, massage oils, facial masks, body sprays, and other hair styling products). All decision tree conclusions should be made within a conservative weight-of-evidence context, considering the reported limitations of the BCOP test for alcohols, ketones, and solids. Copyright © 2010 Elsevier Ltd. All rights reserved.
Protein asparagine deamidation prediction based on structures with machine learning methods.
Jia, Lei; Sun, Yaxiong
2017-01-01
Chemical stability is a major concern in the development of protein therapeutics due to its impact on both efficacy and safety. Protein "hotspots" are amino acid residues that are subject to various chemical modifications, including deamidation, isomerization, glycosylation, oxidation, etc. A more accurate prediction method for potential hotspot residues would allow their elimination or reduction as early as possible in the drug discovery process. In this work, we focus on prediction models for asparagine (Asn) deamidation. The sequence-based prediction method simply identifies the NG motif (amino acid asparagine followed by a glycine) as liable to deamidation. It still dominates the deamidation evaluation process in most pharmaceutical settings due to its convenience. However, the simple sequence-based method is less accurate and often leads to over-engineering of a protein. We introduce structure-based prediction models by mining available experimental and structural data of deamidated proteins. Our training set contains 194 Asn residues from 25 proteins that all have available high-resolution crystal structures. Experimentally measured deamidation half-lives of Asn in penta-peptides, as well as 3D structure-based properties, such as solvent exposure, crystallographic B-factors, local secondary structure and dihedral angles, etc., were used to train prediction models with several machine learning algorithms. The prediction tools were cross-validated as well as tested with an external test data set. The random forest model showed high enrichment in ranking deamidated residues higher than non-deamidated residues while effectively eliminating false positive predictions. It is possible that such quantitative protein structure-function relationship tools can also be applied to other protein hotspot predictions. In addition, we extensively discuss metrics used to evaluate the performance of predicting unbalanced data sets such as the deamidation case.
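The sequence-based baseline described above (flagging every asparagine followed by a glycine) is trivial to express in code; a minimal sketch with a hypothetical function name:

```python
def ng_motif_sites(sequence):
    """Sequence-based deamidation baseline: flag every asparagine (N)
    immediately followed by glycine (G) as a deamidation liability.
    Returns the 0-based positions of the Asn residues. Structure-based
    models refine this list with solvent exposure, B-factors, etc."""
    seq = sequence.upper()
    return [i for i in range(len(seq) - 1)
            if seq[i] == 'N' and seq[i + 1] == 'G']
```

The over-engineering problem noted in the abstract follows directly from this rule: every NG site is flagged regardless of its structural context, so buried or rigid asparagines that would never deamidate on a relevant timescale are still candidates for mutation.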
DOE Office of Scientific and Technical Information (OSTI.GOV)
King, R.D.; Srinivasan, A.
1996-10-01
The machine learning program Progol was applied to the problem of forming the structure-activity relationship (SAR) for a set of compounds tested for carcinogenicity in rodent bioassays by the U.S. National Toxicology Program (NTP). Progol is the first inductive logic programming (ILP) algorithm to use a fully relational method for describing chemical structure in SARs, based on using atoms and their bond connectivities. Progol is well suited to forming SARs for carcinogenicity as it is designed to produce easily understandable rules (structural alerts) for sets of noncongeneric compounds. The Progol SAR method was tested by prediction of a set of compounds that have been widely predicted by other SAR methods (the compounds used in the NTP's first round of carcinogenesis predictions). For these compounds no method (human or machine) was significantly more accurate than Progol. Progol was the most accurate method that did not use data from biological tests on rodents (however, the difference in accuracy is not significant). The Progol predictions were based solely on chemical structure and the results of tests for Salmonella mutagenicity. Using the full NTP database, the prediction accuracy of Progol was estimated to be 63% (±3%) using 5-fold cross validation. A set of structural alerts for carcinogenesis was automatically generated and the chemical rationale for them investigated; these structural alerts are statistically independent of the Salmonella mutagenicity. Carcinogenicity is predicted for the compounds used in the NTP's second round of carcinogenesis predictions. The results for prediction of carcinogenesis, taken together with the previous successful applications of predicting mutagenicity in nitroaromatic compounds, and inhibition of angiogenesis by suramin analogues, show that Progol has a role to play in understanding the SARs of cancer-related compounds.
Fatigue criterion to system design, life and reliability
NASA Technical Reports Server (NTRS)
Zaretsky, E. V.
1985-01-01
A generalized methodology for structural life prediction, design, and reliability based upon a fatigue criterion is advanced. The life prediction methodology is based in part on the work of W. Weibull and of G. Lundberg and A. Palmgren. The approach incorporates the computed lives of the elemental stress volumes of a complex machine element to predict system life. The results of coupon fatigue testing can be incorporated into the analysis, allowing for the prediction of life and of component or structural renewal rates with reasonable statistical certainty.
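The combination of elemental lives into a system life can be sketched with the classical Lundberg-Palmgren-style strict-series Weibull relation, (1/L_sys)^e = sum_i (1/L_i)^e, where e is the Weibull slope. This is a minimal illustration of that relation, not the report's full methodology; the slope value used is arbitrary.

```python
def system_life(component_lives, weibull_slope=1.5):
    """Strict-series Weibull combination of elemental (component) lives:
    (1/L_sys)^e = sum_i (1/L_i)^e, so the system always fails sooner
    than its weakest element at the same survival probability."""
    e = weibull_slope
    return sum(life ** -e for life in component_lives) ** (-1.0 / e)
```

For two identical elements of life L, the system life reduces to L * 2^(-1/e), which is the familiar result that adding stressed volume shortens the life of the assembly.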
NASA Astrophysics Data System (ADS)
Nascimento, Luis Alberto Herrmann do
This dissertation presents the implementation and validation of the viscoelastic continuum damage (VECD) model for asphalt mixture and pavement analysis in Brazil. It proposes a simulated damage-to-fatigue cracked area transfer function for the layered viscoelastic continuum damage (LVECD) program framework and defines the model framework's fatigue cracking prediction error for asphalt pavement reliability-based design solutions in Brazil. The research is divided into three main steps: (i) implementation of the simplified viscoelastic continuum damage (S-VECD) model in Brazil (Petrobras) for asphalt mixture characterization, (ii) validation of the LVECD model approach for pavement analysis based on field performance observations, and definition of a local simulated damage-to-cracked area transfer function for the Fundao project's pavement test sections in Rio de Janeiro, RJ, and (iii) validation of the Fundao project local transfer function to be used throughout Brazil for asphalt pavement fatigue cracking predictions, based on field performance observations of the National MEPDG project's pavement test sections, thereby validating the proposed framework's prediction capability. For the first step, the S-VECD test protocol, which uses a controlled-on-specimen strain mode of loading, was successfully implemented at Petrobras and used to characterize Brazilian asphalt mixtures that are composed of a wide range of asphalt binders. This research verified that the S-VECD model coupled with the GR failure criterion is accurate for fatigue life predictions of Brazilian asphalt mixtures, even when very different asphalt binders are used. Also, the applicability of the linear amplitude sweep (LAS) test for the fatigue characterization of the asphalt binders was checked, and the effects of different asphalt binders on the fatigue damage properties of the asphalt mixtures were investigated.
The LAS test results, modeled according to VECD theory, presented a strong correlation with the asphalt mixtures' fatigue performance. In the second step, the S-VECD test protocol was used to characterize the asphalt mixtures used in the 27 selected Fundao project test sections and subjected to real traffic loading. Thus, the asphalt mixture properties, pavement structure data, traffic loading, and climate were input into the LVECD program for pavement fatigue cracking performance simulations. The simulation results showed good agreement with the field-observed distresses. Then, a damage shift approach, based on the initial simulated damage growth rate, was introduced in order to obtain a unique relationship between the LVECD-simulated shifted damage and the pavement-observed fatigue cracked areas. This correlation was fitted to a power form function and defined as the averaged reduced damage-to-cracked area transfer function. The last step consisted of using the averaged reduced damage-to-cracked area transfer function that was developed in the Fundao project to predict pavement fatigue cracking in 17 National MEPDG project test sections. The procedures for the material characterization and pavement data gathering adopted in this step are similar to those used for the Fundao project simulations. This research verified that the transfer function defined for the Fundao project sections can be used for the fatigue performance predictions of a wide range of pavements all over Brazil, as the predicted and observed cracked areas for the National MEPDG pavements presented good agreement, following the same trends found for the Fundao project pavement sites. Based on the prediction errors determined for all 44 pavement test sections (Fundao and National MEPDG test sections), the proposed framework's prediction capability was determined so that reliability-based solutions can be applied for flexible pavement design. 
It was concluded that the proposed LVECD program framework has very good fatigue cracking prediction capability.
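The power-form transfer function described above maps LVECD-simulated shifted damage to an observed cracked area. A minimal sketch of such a calibration, assuming a simple least-squares fit in log-log space; the data, function names, and coefficients below are hypothetical and do not reproduce the dissertation's calibrated function or its damage-shift procedure:

```python
import math

def fit_power_transfer_function(shifted_damage, cracked_area):
    """Fit CA = a * D**b by linear least squares on (log D, log CA).
    Illustrative stand-in for a power-form damage-to-cracked-area
    transfer function."""
    xs = [math.log(d) for d in shifted_damage]
    ys = [math.log(c) for c in cracked_area]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

# Hypothetical calibration pairs: simulated shifted damage vs. observed % cracked area
damage = [0.05, 0.10, 0.20, 0.40]
area = [2.0, 5.7, 16.0, 45.3]
a, b = fit_power_transfer_function(damage, area)
```

Once calibrated on one set of sections (here, the Fundao project), `a * D**b` would be evaluated on new simulations (the National MEPDG sections) to predict cracked areas.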
Using a Gravity Model to Predict Circulation in a Public Library System.
ERIC Educational Resources Information Center
Ottensmann, John R.
1995-01-01
Describes the development of a gravity model based upon principles of spatial interaction to predict the circulation of libraries in the Indianapolis-Marion County Public Library (Indiana). The model effectively predicted past circulation figures and was tested by predicting future library circulation, particularly for a new branch library.…
Nondestructive testing methods to predict effect of degradation on wood : a critical assessment
J. Kaiserlik
1978-01-01
Results are reported for an assessment of methods for predicting strength of wood, wood-based, or related material. Research directly applicable to nondestructive strength prediction was very limited. In wood, strength prediction research is limited to vibration decay, wave attenuation, and multiparameter "degradation models." Nonwood methods with potential...
ERIC Educational Resources Information Center
Moses, Tim
2011-01-01
The purpose of this study was to consider the relationships of prediction, measurement, and scaling invariance when these invariances were simultaneously evaluated in psychometric test data. An approach was developed to evaluate prediction, measurement, and scaling invariance based on linear and nonlinear prediction, measurement, and scaling…
Sun, Yazhen; Fang, Chenze; Wang, Jinchang; Yuan, Xuezhong; Fan, Dong
2018-05-03
Laboratory predictions of the fatigue life of an asphalt mixture under cyclic loading, based on the plateau value (PV) of the permanent deformation ratio (PDR), were carried out by three-point bending fatigue tests. The influence of test conditions on the recovery ratio of elastic deformation (RRED), the permanent deformation (PD), and the PDR, as well as the trends of the RRED, PD, and PDR, were studied. The damage variable was defined using the PDR, and the relation of the fatigue life to the PDR was determined by analyzing the damage evolution process. The fatigue equation was established based on the PV of the PDR, and the fatigue life was predicted by analyzing the relation of the fatigue life to the PV. The results show that the RRED decreases with an increasing number of loading cycles, and the elastic recovery ability of the asphalt mixture gradually decreases. The two proposed mathematical models, based on the change laws of the RRED and the PD, describe these change laws well. Neither the RRED nor the PD alone can predict the fatigue life well, because they do not change monotonically with the fatigue life; one part of the deformation causes the damage and the other part causes the viscoelastic deformation. The fatigue life decreases with increasing PDR. The average PDR in the second stage is taken as the PV, and the fatigue life decreases as a power law with increasing PV. The average relative error between the fatigue life predicted by the fatigue equation and the test fatigue life is 5.77%. The fatigue equation based on the PV can predict the fatigue life well.
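The abstract states that fatigue life decreases as a power law with the PV and that predictions were within 5.77% average relative error. A hedged sketch of that relationship and metric; the power-law form and the constants k and m below are hypothetical, not the paper's fitted values:

```python
def predict_fatigue_life(k, m, pv):
    """Fatigue life from the plateau value, assuming a decreasing
    power-law form Nf = k * PV**(-m) (k, m are hypothetical fitted
    constants consistent with 'decreases in a power law with the PV')."""
    return k * pv ** (-m)

def average_relative_error(predicted, measured):
    """Mean absolute relative error (%), the figure reported as 5.77%."""
    return 100.0 * sum(abs(p - t) / t
                       for p, t in zip(predicted, measured)) / len(measured)
```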
Selection of optimal sensors for predicting performance of polymer electrolyte membrane fuel cell
NASA Astrophysics Data System (ADS)
Mao, Lei; Jackson, Lisa
2016-10-01
In this paper, sensor selection algorithms are investigated based on a sensitivity analysis, and the capability of the optimal sensors to predict PEM fuel cell performance is also studied using test data. A fuel cell model is developed to generate the sensitivity matrix relating sensor measurements to fuel cell health parameters. From the sensitivity matrix, two sensor selection approaches, the largest gap method and an exhaustive brute-force search, are applied to find the optimal sensors providing reliable predictions. Based on the results, a sensor selection approach considering both sensor sensitivity and noise resistance is proposed to find the optimal sensor set with minimum size. Furthermore, the performance of the optimal sensor set in predicting fuel cell performance is studied using test data from a PEM fuel cell system. Results demonstrate that with the optimal sensors, the performance of a PEM fuel cell can be predicted with good quality.
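One plausible reading of a gap-based selection over a sensitivity matrix is a greedy search that keeps the chosen sensor rows maximally separated. The sketch below is only an interpretation under that assumption; the paper's exact largest-gap criterion may differ, and the matrix values are invented:

```python
def select_sensors_largest_gap(S, k):
    """Greedy sketch of gap-based selection: pick k rows of the
    sensitivity matrix S (sensors x parameters) that stay maximally
    separated from those already chosen. Illustrative only."""
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
    # seed with the sensor of largest overall sensitivity magnitude
    chosen = [max(range(len(S)), key=lambda i: dist(S[i], [0.0] * len(S[i])))]
    while len(chosen) < k:
        best = max((i for i in range(len(S)) if i not in chosen),
                   key=lambda i: min(dist(S[i], S[j]) for j in chosen))
        chosen.append(best)
    return sorted(chosen)
```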
Probabilistic Analysis of Aircraft Gas Turbine Disk Life and Reliability
NASA Technical Reports Server (NTRS)
Melis, Matthew E.; Zaretsky, Erwin V.; August, Richard
1999-01-01
Two series of low cycle fatigue (LCF) test data for two groups of different aircraft gas turbine engine compressor disk geometries were reanalyzed and compared using Weibull statistics. Both groups of disks were manufactured from titanium (Ti-6Al-4V) alloy. Probable Cause, a probabilistic computer code developed at the NASA Glenn Research Center, was used to predict disk life and reliability. A material-life factor A was determined for titanium (Ti-6Al-4V) alloy based upon fatigue disk data and successfully applied to predict the life of the disks as a function of speed. A comparison was made with the currently used life prediction method based upon crack growth rate. Applying an endurance limit to the computer code did not significantly affect the predicted lives under engine operating conditions. Predicted failure locations correlate with those experimentally observed in the LCF tests. A reasonable correlation was obtained between the predicted disk lives using the Probable Cause code and a modified crack growth method for life prediction. Both methods slightly overpredict life for one disk group and significantly underpredict it for the other.
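Reanalyzing LCF lives "using Weibull statistics" typically means fitting a two-parameter Weibull distribution to the failure data. The sketch below uses standard median-rank regression; it does not reproduce the internals of the Probable Cause code, and the data in the test are synthetic:

```python
import math

def weibull_fit(lives):
    """Estimate the Weibull slope (beta) and characteristic life (eta)
    from failure times by median-rank regression, a common treatment of
    LCF data sets."""
    n = len(lives)
    pts = []
    for i, t in enumerate(sorted(lives), start=1):
        f = (i - 0.3) / (n + 0.4)  # Benard's median rank
        pts.append((math.log(t), math.log(-math.log(1.0 - f))))
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    beta = sum((x - mx) * (y - my) for x, y in pts) / \
        sum((x - mx) ** 2 for x, _ in pts)
    eta = math.exp(mx - my / beta)
    return beta, eta
```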
Lienemann, Kai; Plötz, Thomas; Pestel, Sabine
2008-01-01
The aim of safety pharmacology is early detection of compound-induced side-effects. NMR-based urine analysis followed by multivariate data analysis (metabonomics) efficiently identifies differences between toxic and non-toxic compounds, but in most cases multiple administrations of the test compound are necessary. We tested the feasibility of detecting proximal tubule kidney toxicity and phospholipidosis with metabonomics techniques after single compound administration as an early safety pharmacology approach. Rats were treated orally, intravenously, inhalatively or intraperitoneally with different test compounds. Urine was collected at 0-8 h and 8-24 h after compound administration, and (1)H NMR patterns were recorded from the samples. Variation of post-processing and feature extraction methods led to different views on the data. Support Vector Machines were trained on these different data sets and then aggregated as experts in an ensemble. Finally, validity was monitored with a cross-validation study using training, validation, and test data sets. Proximal tubule kidney toxicity could be predicted with reasonable total classification accuracy (85%), specificity (88%) and sensitivity (78%). In comparison to alternative histological studies, results were obtained more quickly, less compound was required, and, very importantly, fewer animals were needed. In contrast, the induction of phospholipidosis by the test compounds could not be predicted using NMR-based urine analysis or the previously published biomarker PAG. NMR-based urine analysis was shown to effectively predict proximal tubule kidney toxicity after single compound administration in rats. Thus, this experimental design allows early detection of toxicity risks with relatively low amounts of compound in a reasonably short period of time.
Rapid diagnostic tests for malaria at sites of varying transmission intensity in Uganda.
Hopkins, Heidi; Bebell, Lisa; Kambale, Wilson; Dokomajilar, Christian; Rosenthal, Philip J; Dorsey, Grant
2008-02-15
In Africa, fever is often treated presumptively as malaria, resulting in misdiagnosis and the overuse of antimalarial drugs. Rapid diagnostic tests (RDTs) for malaria may allow improved fever management. We compared RDTs based on histidine-rich protein 2 (HRP2) and RDTs based on Plasmodium lactate dehydrogenase (pLDH) with expert microscopy and PCR-corrected microscopy for 7000 patients at sites of varying malaria transmission intensity across Uganda. When all sites were considered, the sensitivity of the HRP2-based test was 97% when compared with microscopy and 98% when corrected by PCR; the sensitivity of the pLDH-based test was 88% when compared with microscopy and 77% when corrected by PCR. The specificity of the HRP2-based test was 71% when compared with microscopy and 88% when corrected by PCR; the specificity of the pLDH-based test was 92% when compared with microscopy and >98% when corrected by PCR. Based on Plasmodium falciparum PCR-corrected microscopy, the positive predictive value (PPV) of the HRP2-based test was high (93%) at all but the site with the lowest transmission rate; the pLDH-based test and expert microscopy offered excellent PPVs (98%) for all sites. The negative predictive value (NPV) of the HRP2-based test was consistently high (>97%); in contrast, the NPV for the pLDH-based test dropped significantly (from 98% to 66%) as transmission intensity increased, and the NPV for expert microscopy decreased significantly (99% to 54%) because of increasing failure to detect subpatent parasitemia. Based on the high PPV and NPV, HRP2-based RDTs are likely to be the best diagnostic choice for areas with medium-to-high malaria transmission rates in Africa.
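The four quantities reported for the RDTs all follow from a 2x2 table of test result against a reference standard (here, PCR-corrected microscopy). A minimal sketch with invented counts:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table:
    tp/fp/fn/tn = true/false positives and negatives versus the
    reference standard."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }
```

Unlike sensitivity and specificity, PPV and NPV depend on prevalence, which is why the abstract reports them varying with transmission intensity across sites.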
Proof-test-based life prediction of high-toughness pressure vessels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Panontin, T.L.; Hill, M.R.
1996-02-01
The paper examines the problems associated with applying proof-test-based life prediction to vessels made of high-toughness metals. Two A106 Gr B pipe specimens containing long, through-wall circumferential flaws were tested. One failed during hydrostatic testing and the other during tension-tension cycling following a hydrostatic test. Quantitative fractography was used to verify experimentally obtained fatigue crack growth rates and a variety of LEFM and EPFM techniques were used to analyze the experimental results. The results show that: plastic collapse analysis provides accurate predictions of screened (initial) crack size when the flow stress is determined experimentally; LEFM analysis underestimates the crack size screened by the proof test and overpredicts the subsequent fatigue life of the vessel when retardation effects are small (i.e., low proof levels); and, at a high proof-test level (2.4 × operating pressure), the large retardation effect on fatigue crack growth due to the overload overwhelmed the deleterious effect on fatigue life from stable tearing during the proof test and alleviated the problem of screening only long cracks due to the high toughness of the metal.
Stefanidis, Dimitrios; Korndorffer, James R; Black, F William; Dunne, J Bruce; Sierra, Rafael; Touchard, Cheri L; Rice, David A; Markert, Ronald J; Kastl, Peter R; Scott, Daniel J
2006-08-01
Laparoscopic simulator training translates into improved operative performance. Proficiency-based curricula maximize efficiency by tailoring training to meet the needs of each individual; however, because rates of skill acquisition vary widely, such curricula may be difficult to implement. We hypothesized that psychomotor testing would predict baseline performance and training duration in a proficiency-based laparoscopic simulator curriculum. Residents (R1, n = 20) were enrolled in an IRB-approved prospective study at the beginning of the academic year. All completed the following: a background information survey, a battery of 12 innate ability measures (5 motor and 7 visual-spatial), and baseline testing on 3 validated simulators (5 videotrainer [VT] tasks, 12 virtual reality [minimally invasive surgical trainer-virtual reality, MIST-VR] tasks, and 2 laparoscopic camera navigation [LCN] tasks). Participants trained to proficiency, and training duration and number of repetitions were recorded. Baseline test scores were correlated to skill acquisition rate. Cutoff scores for each predictive test were calculated based on a receiver operating characteristic (ROC) curve, and their sensitivity and specificity in identifying slow learners were determined. Only the Cards Rotation test correlated with baseline simulator ability on VT and LCN. Curriculum implementation required 347 man-hours (6-person team) and $795,000 of capital equipment. With an attendance rate of 75%, 19 of 20 residents (95%) completed the curriculum by the end of the academic year. To complete training, a median of 12 hours (range, 5.5-21) and 325 repetitions (range, 171-782) were required. Simulator score improvement was 50%. Training duration and repetitions correlated with prior video game and billiard exposure, grooved pegboard, finger tap, map planning, Rey Figure Immediate Recall score, and baseline performance on VT and LCN. The map planning cutoff score proved most specific in identifying slow learners. 
Proficiency-based laparoscopic simulator training provides improvement in performance and can be effectively implemented as a routine part of resident education, but may require significant resources. Although psychomotor testing may be of limited value in the prediction of baseline laparoscopic performance, its importance may lie in the prediction of the rapidity of skill acquisition. These tests may be useful in optimizing curricular design by allowing the tailoring of training to individual needs.
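Choosing a test cutoff from an ROC curve means sweeping thresholds and trading sensitivity against specificity. The sketch below uses Youden's J, one common criterion; the study instead favored the most specific cutoff, so this is an illustrative variant with invented scores, and it assumes higher scores flag slow learners:

```python
def best_cutoff(scores, is_slow):
    """Sweep candidate cutoffs and return the one maximizing Youden's J
    (sensitivity + specificity - 1) for flagging slow learners."""
    best_c, best_j = None, -1.0
    for c in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, is_slow) if s >= c and y)
        fn = sum(1 for s, y in zip(scores, is_slow) if s < c and y)
        tn = sum(1 for s, y in zip(scores, is_slow) if s < c and not y)
        fp = sum(1 for s, y in zip(scores, is_slow) if s >= c and not y)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        if sens + spec - 1.0 > best_j:
            best_j, best_c = sens + spec - 1.0, c
    return best_c
```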
Drug Distribution. Part 1. Models to Predict Membrane Partitioning.
Nagar, Swati; Korzekwa, Ken
2017-03-01
Tissue partitioning is an important component of drug distribution and half-life. Protein binding and lipid partitioning together determine drug distribution. Two structure-based models to predict partitioning into microsomal membranes are presented. An orientation-based model was developed using a membrane template and atom-based relative free energy functions to select drug conformations and orientations for neutral and basic drugs. The resulting model predicts the correct membrane positions for nine compounds tested, and predicts the membrane partitioning for n = 67 drugs with an average fold-error of 2.4. Next, a more facile descriptor-based model was developed for acids, neutrals and bases. This model considers the partitioning of neutral and ionized species at equilibrium, and can predict membrane partitioning with an average fold-error of 2.0 (n = 92 drugs). Together these models suggest that drug orientation is important for membrane partitioning and that membrane partitioning can be well predicted from physicochemical properties.
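The accuracy figures above (2.4-fold and 2.0-fold) use the average fold-error, a log-scale metric that treats over- and under-prediction symmetrically. A minimal sketch:

```python
import math

def average_fold_error(predicted, observed):
    """Average fold-error: 10 ** mean(|log10(pred/obs)|). A value of 2.0
    means predictions are, on average, within a factor of two of the
    observed membrane partitioning."""
    n = len(predicted)
    return 10.0 ** (sum(abs(math.log10(p / o))
                        for p, o in zip(predicted, observed)) / n)
```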
Pretreatment data is highly predictive of liver chemistry signals in clinical trials.
Cai, Zhaohui; Bresell, Anders; Steinberg, Mark H; Silberg, Debra G; Furlong, Stephen T
2012-01-01
The goal of this retrospective analysis was to assess how well predictive models could determine which patients would develop liver chemistry signals during clinical trials based on their pretreatment (baseline) information. Based on data from 24 late-stage clinical trials, classification models were developed to predict liver chemistry outcomes using baseline information, which included demographics, medical history, concomitant medications, and baseline laboratory results. Predictive models using baseline data predicted which patients would develop liver signals during the trials with average validation accuracy around 80%. Baseline levels of individual liver chemistry tests were most important for predicting their own elevations during the trials. High bilirubin levels at baseline were not uncommon and were associated with a high risk of developing biochemical Hy's law cases. Baseline γ-glutamyltransferase (GGT) level appeared to have some predictive value, but did not increase predictability beyond using established liver chemistry tests. It is possible to predict which patients are at a higher risk of developing liver chemistry signals using pretreatment (baseline) data. Derived knowledge from such predictions may allow proactive and targeted risk management, and the type of analysis described here could help determine whether new biomarkers offer improved performance over established ones.
NASA Astrophysics Data System (ADS)
Sadi, Maryam
2018-01-01
In this study, a group method of data handling (GMDH) model has been successfully developed to predict the heat capacity of ionic liquid based nanofluids by considering reduced temperature, acentric factor and molecular weight of ionic liquids, and nanoparticle concentration as input parameters. In order to accomplish modeling, 528 experimental data points extracted from the literature have been divided into training and testing subsets. The training set has been used to estimate model coefficients and the testing set has been applied for model validation. The ability and accuracy of the developed model have been evaluated by comparing model predictions with experimental values using different statistical parameters such as the coefficient of determination, mean square error and mean absolute percentage error. The mean absolute percentage errors of the developed model for the training and testing sets are 1.38% and 1.66%, respectively, which indicate excellent agreement between model predictions and experimental data. Also, the results estimated by the developed GMDH model exhibit a higher accuracy when compared to the available theoretical correlations.
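The validation statistic quoted above (1.38% training, 1.66% testing) is the mean absolute percentage error, which can be sketched directly:

```python
def mape(predicted, actual):
    """Mean absolute percentage error between model predictions and
    experimental heat-capacity values."""
    return 100.0 * sum(abs((a - p) / a)
                       for p, a in zip(predicted, actual)) / len(actual)
```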
Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L
2017-01-01
Background Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. Objectives We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Methods Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Results Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Conclusions Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. 
By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially harmful medication errors. PMID:27193033
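"Accounting for 37% of the variance" refers to the R-squared of a regression of real-world error rates on laboratory error rates. For a single predictor this reduces to the squared correlation, sketched here with hypothetical rate pairs:

```python
def linear_r2(x, y):
    """R^2 of a simple least-squares fit of real-world error rates (y)
    on laboratory-test error rates (x); equals the squared Pearson
    correlation for one predictor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)
```

The study's cross-validation repeated this kind of fit on a second pharmacy chain's data, where 45% of the variance was explained.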
Donà, Valentina; Low, Nicola; Golparian, Daniel; Unemo, Magnus
2017-09-01
The number of genetic tests, mostly real-time PCRs, to detect antimicrobial resistance (AMR) determinants and predict AMR in Neisseria gonorrhoeae is increasing. Several of these assays are promising, but there are important shortcomings and few assays have been adequately validated and quality assured. Areas covered: Recent advances, focusing on publications since 2012, in the development and use of molecular tests to predict gonococcal AMR for surveillance and for clinical use, advantages and disadvantages of these tests and of molecular AMR prediction compared with phenotypic AMR testing, and future perspectives for effective use of molecular AMR tests for different purposes. Expert commentary: Several challenges for direct testing of clinical, especially extra-genital, specimens remain. The choice of molecular assay needs to consider the assay target, quality controls, sample types, limitations intrinsic to molecular technologies, and specific to the chosen methodology, and the intended use of the test. Improved molecular- and particularly genome-sequencing-based methods will supplement AMR testing for surveillance purposes, and translate into point-of-care tests that will lead to personalized treatments, while sparing the last available empiric treatment option (ceftriaxone). However, genetic AMR prediction will never completely replace phenotypic AMR testing, which detects also AMR due to unknown AMR determinants.
Within the field of chemical safety assessment, there is a desire to replace costly whole organism testing with more efficient and cost-effective alternatives based on in vitro test systems. Disruption of thyroid hormone signaling via inhibition of enzymes called deiodinases is o...
NASA Technical Reports Server (NTRS)
Wells, Jason E.; Black, David L.; Taylor, Casey L.
2013-01-01
Exhaust plumes from large solid rocket motors fired at ATK's Promontory test site carry particulates to high altitudes and typically produce deposits that fall on regions downwind of the test area. As populations and communities near the test facility grow, ATK has become increasingly concerned about the impact of motor testing on those surrounding communities. To assess the potential impact of motor testing on the community and to identify feasible mitigation strategies, it is essential to have a tool capable of predicting plume behavior downrange of the test stand. A software package, called PlumeTracker, has been developed and validated at ATK for this purpose. The code is a point model that offers a time-dependent, physics-based description of plume transport and precipitation. The code can utilize either measured or forecasted weather data to generate plume predictions. Next-Generation Radar (NEXRAD) data and field observations from twenty-three historical motor test fires at Promontory were collected to test the predictive capability of PlumeTracker. Model predictions for plume trajectories and deposition fields were found to correlate well with the collected dataset.
ERIC Educational Resources Information Center
Ivancevich, John M.
1976-01-01
This empirically based study of 324 technicians investigated the moderating impact of job satisfaction in the prediction of job performance criteria from ability test scores. The findings suggest that the type of job satisfaction facet and the performance criterion used are important considerations when examining satisfaction as a moderator.…
ERIC Educational Resources Information Center
Demir, Metin
2015-01-01
This study predicts the number of correct answers given by pre-service classroom teachers in Civil Servant Recruitment Examination's (CSRE) educational sciences test based on their high school grade point averages, university entrance scores, and grades (mid-term and final exams) from their undergraduate educational courses. This study was…
Testing and Extending VSEPR with WebMO and MOPAC or GAMESS
ERIC Educational Resources Information Center
McNaught, Ian J.
2011-01-01
VSEPR is a topic that is commonly taught in undergraduate chemistry courses. The readily available Web-based program WebMO, in conjunction with the computational chemistry programs MOPAC and GAMESS, is used to quantitatively test a wide range of predictions of VSEPR. These predictions refer to the point group of the molecule, including the…
Developing Local Oral Reading Fluency Cut Scores for Predicting High-Stakes Test Performance
ERIC Educational Resources Information Center
Grapin, Sally L.; Kranzler, John H.; Waldron, Nancy; Joyce-Beaulieu, Diana; Algina, James
2017-01-01
This study evaluated the classification accuracy of a second grade oral reading fluency curriculum-based measure (R-CBM) in predicting third grade state test performance. It also compared the long-term classification accuracy of local and publisher-recommended R-CBM cut scores. Participants were 266 students who were divided into a calibration…
ERIC Educational Resources Information Center
Kang, Okim; Thomson, Ron I.; Moran, Meghan
2018-01-01
This study compared five research-based intelligibility measures as they were applied to six varieties of English. The objective was to determine which approach to measuring intelligibility would be most reliable for predicting listener comprehension, as measured through a listening comprehension test similar to the Test of English as a Foreign…
Mueller, Stefan O; Dekant, Wolfgang; Jennings, Paul; Testai, Emanuela; Bois, Frederic
2015-12-25
This special issue of Toxicology in Vitro is dedicated to disseminating the results of the EU-funded collaborative project "Profiling the toxicity of new drugs: a non animal-based approach integrating toxicodynamics and biokinetics" (Predict-IV; Grant 202222). The project's overall aim was to develop strategies to improve the assessment of drug safety in the early stage of development and late discovery phase, by an intelligent combination of non animal-based test systems, cell biology, mechanistic toxicology and in silico modeling, in a rapid and cost effective manner. This overview introduces the scope and overall achievements of Predict-IV. Copyright © 2014 Elsevier Ltd. All rights reserved.
Descent Advisor Preliminary Field Test
NASA Technical Reports Server (NTRS)
Green, Steven M.; Vivona, Robert A.; Sanford, Beverly
1995-01-01
A field test of the Descent Advisor (DA) automation tool was conducted at the Denver Air Route Traffic Control Center in September 1994. DA is being developed to assist Center controllers in the efficient management and control of arrival traffic. DA generates advisories, based on trajectory predictions, to achieve accurate meter-fix arrival times in a fuel efficient manner while assisting the controller with the prediction and resolution of potential conflicts. The test objectives were: (1) to evaluate the accuracy of DA trajectory predictions for conventional and flight-management system equipped jet transports, (2) to identify significant sources of trajectory prediction error, and (3) to investigate procedural and training issues (both air and ground) associated with DA operations. Various commercial aircraft (97 flights total) and a Boeing 737-100 research aircraft participated in the test. Preliminary results from the primary test set of 24 commercial flights indicate a mean DA arrival time prediction error of 2.4 seconds late with a standard deviation of 13.1 seconds. This paper describes the field test and presents preliminary results for the commercial flights.
NASA Astrophysics Data System (ADS)
Noble, Clifford Elliott, II
2002-09-01
The problem. The purpose of this study was to investigate the ability of three single-task instruments---(a) the Test of English as a Foreign Language, (b) the Aviation Test of Spoken English, and (c) the Single Manual-Tracking Test---and three dual-task instruments---(a) the Concurrent Manual-Tracking and Communication Test, (b) the Certified Flight Instructor's Test, and (c) the Simulation-Based English Test---to predict the language performance of 10 Chinese student pilots speaking English as a second language when operating single-engine and multiengine aircraft within American airspace. Method. This research implemented a correlational design to investigate the ability of the six described instruments to predict the mean score of the criterion evaluation, which was the Examiner's Test. This test assessed the oral communication skill of student pilots on the flight portion of the terminal checkride in the Piper Cadet, Piper Seminole, and Beechcraft King Air airplanes. Results. Data from the Single Manual-Tracking Test, as well as the Concurrent Manual-Tracking and Communication Test, were discarded due to performance ceiling effects. Hypothesis 1, which stated that the average correlation between the mean scores of the dual-task evaluations and that of the Examiner's Test would predict the mean score of the criterion evaluation with a greater degree of accuracy than that of single-task evaluations, was not supported. Hypothesis 2, which stated that the correlation between the mean scores of the participants on the Simulation-Based English Test and the Examiner's Test would predict the mean score of the criterion evaluation with a greater degree of accuracy than that of all single- and dual-task evaluations, was also not supported. The findings suggest that single- and dual-task assessments administered after initial flight training are equivalent predictors of language performance when piloting single-engine and multiengine aircraft.
NASA Astrophysics Data System (ADS)
Hardikar, Kedar Y.; Liu, Bill J. J.; Bheemreddy, Venkata
2016-09-01
Understanding and characterizing degradation mechanisms is critical to developing relevant accelerated tests that ensure PV module performance warranty over a typical lifetime of 25 years. As newer technologies are adapted for PV, including new PV cell technologies, new packaging materials, and newer product designs, the availability of field data over extended periods of time for product performance assessment cannot be expected within the typical timeframe for business decisions. In this work, to enable product design decisions and product performance assessment for PV modules utilizing newer technologies, a Simulation and Mechanism based Accelerated Reliability Testing (SMART) methodology and empirical approaches to predict field performance from accelerated test results are presented. The method is demonstrated for field life assessment of flexible PV modules based on degradation mechanisms observed in two accelerated tests, namely, Damp Heat and Thermal Cycling. The method is based on the design of an accelerated testing scheme with the intent to develop relevant acceleration factor models. The acceleration factor model is validated by extensive reliability testing under different conditions going beyond the established certification standards. Once the acceleration factor model is validated for the test matrix, a modeling scheme is developed to predict field performance from the results of accelerated testing for particular failure modes of interest. Further refinement of the model can continue as more field data become available. While the demonstration of the method in this work is for thin film flexible PV modules, the framework and methodology can be adapted to other PV products.
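For thermally driven mechanisms such as those stressed in Damp Heat testing, acceleration factor models often take an Arrhenius form. The sketch below is a generic example, not the paper's fitted model; the activation energy of 0.7 eV is a placeholder assumption:

```python
import math

def arrhenius_af(t_use_c, t_test_c, ea_ev=0.7):
    """Arrhenius acceleration factor between a field-use temperature and
    an accelerated-test temperature (deg C). Ea = 0.7 eV is a placeholder
    activation energy, not a value from the paper."""
    k_b = 8.617e-5  # Boltzmann constant, eV/K
    t_use, t_test = t_use_c + 273.15, t_test_c + 273.15
    return math.exp((ea_ev / k_b) * (1.0 / t_use - 1.0 / t_test))

def field_equivalent_hours(test_hours, af):
    """Field exposure represented by accelerated test time."""
    return test_hours * af
```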
Evaluation of a deep learning architecture for MR imaging prediction of ATRX in glioma patients
NASA Astrophysics Data System (ADS)
Korfiatis, Panagiotis; Kline, Timothy L.; Erickson, Bradley J.
2018-02-01
Predicting mutation/loss of the alpha-thalassemia/mental retardation syndrome X-linked (ATRX) gene from MR imaging is of high importance, since it is a predictor of response and prognosis in brain tumors. In this study, we compare a deep neural network approach based on a residual deep neural network (ResNet) architecture with one based on a classical machine learning approach, and evaluate their ability to predict ATRX mutation status without the need for a distinct tumor segmentation step. We found that the ResNet50 (50 layers) architecture, pretrained on ImageNet data, was the best performing model, achieving an f1 score of 0.91 on a test set of 35 cases (classifying each slice as no tumor, ATRX mutated, or ATRX non-mutated). The SVM classifier achieved 0.63 for differentiating the FLAIR signal abnormality regions of the test patients based on their mutation status. We report a method that alleviates the need for extensive preprocessing and serves as a proof of concept that deep neural network architectures can be used to predict molecular biomarkers from routine medical images.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miltiadis Alamaniotis; Vivek Agarwal
This paper places itself in the realm of anticipatory systems and envisions monitoring and control methods capable of making predictions over system critical parameters. Anticipatory systems allow intelligent control of complex systems by predicting their future state. In the current work, an intelligent model aimed at implementing anticipatory monitoring and control in the energy industry is presented and tested. More particularly, a set of support vector regressors (SVRs) is trained using both historical and observed data. The trained SVRs are used to predict the future value of the system based on current operational system parameters. The predicted values are then fed into a fuzzy logic based module where the values are fused to obtain a single value, i.e., the final system output prediction. The methodology is tested on real turbine degradation datasets. The outcome of the approach presented in this paper highlights its superiority over single support vector regressors. In addition, it is shown that appropriate selection of fuzzy sets and fuzzy rules plays an important role in improving system performance.
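As a rough sketch of the pipeline described above (an ensemble of SVRs whose outputs are fused into one prediction), the following uses scikit-learn SVRs on synthetic degradation data and a simple triangular-membership weighting as a stand-in for the paper's fuzzy logic module; all data and hyperparameters are illustrative.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Synthetic turbine-degradation signal: a health index decaying with noise.
t = np.linspace(0, 10, 200).reshape(-1, 1)
y = 1.0 - 0.08 * t.ravel() + 0.02 * rng.standard_normal(200)

# A set of differently configured SVRs, standing in for the trained ensemble.
models = [SVR(kernel="rbf", C=10.0, epsilon=0.01),
          SVR(kernel="rbf", C=1.0, gamma=0.5, epsilon=0.01),
          SVR(kernel="linear", C=1.0, epsilon=0.01)]
for m in models:
    m.fit(t, y)

# One-step-ahead prediction from each regressor.
t_next = np.array([[10.5]])
preds = np.array([m.predict(t_next)[0] for m in models])

# Stand-in for the fuzzy fusion module: weight each prediction by a
# triangular membership centred on the ensemble median, then average.
centre = np.median(preds)
width = max(np.ptp(preds), 1e-6)
weights = np.clip(1.0 - np.abs(preds - centre) / width, 0.0, 1.0)
fused = float(np.sum(weights * preds) / np.sum(weights))
```

The paper's actual fusion uses designed fuzzy sets and rules; the triangular weighting here only illustrates how discordant regressors can be down-weighted before combining.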
Buvé, Carolien; Van Bedts, Tine; Haenen, Annelien; Kebede, Biniam; Braekers, Roel; Hendrickx, Marc; Van Loey, Ann; Grauwet, Tara
2018-07-01
Accurate shelf-life dating of food products is crucial for consumers and industries. Therefore, in this study we applied a science-based approach for shelf-life assessment, including accelerated shelf-life testing (ASLT), acceptability testing and the screening of analytical attributes for fast shelf-life predictions. Shelf-stable strawberry juice was selected as a case study. Ambient storage (20 °C) had no effect on the aroma-based acceptance of strawberry juice. The colour-based acceptability decreased during storage under ambient and accelerated (28-42 °C) conditions. The application of survival analysis showed that the colour-based shelf-life was reached in the early stages of storage (≤11 weeks) and that the shelf-life was shortened at higher temperatures. None of the selected attributes (a* and ΔE* values, anthocyanin and ascorbic acid content) is an ideal analytical marker for shelf-life predictions in the investigated temperature range (20-42 °C). Nevertheless, an overall analytical cut-off value over the whole temperature range can be selected. Colour changes of strawberry juice during storage are shelf-life limiting. Combining ASLT with acceptability testing allowed us to gain faster insight into the change in colour-based acceptability and to perform shelf-life predictions relying on scientific data. An analytical marker is a convenient tool for shelf-life predictions in the context of ASLT. © 2017 Society of Chemical Industry.
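The analytical cut-off idea can be sketched with first-order kinetics for a colour attribute: pick a cut-off level, then solve for the storage time at which each temperature reaches it. The rate constants and cut-off below are hypothetical, not the study's measurements.

```python
import math

# Hypothetical first-order rate constants (per week) for a colour attribute
# at each storage temperature; illustrative only, not values from the study.
k_per_week = {20: 0.02, 28: 0.04, 35: 0.08, 42: 0.15}
cutoff_fraction = 0.2  # fractional attribute change deemed unacceptable

def shelf_life_weeks(k):
    # Solve 1 - exp(-k * t) = cutoff_fraction for t.
    return -math.log(1.0 - cutoff_fraction) / k

shelf_lives = {temp: shelf_life_weeks(k) for temp, k in k_per_week.items()}
```

This is the core convenience of an analytical marker in ASLT: once the cut-off is fixed, shelf life at every temperature follows from the fitted kinetics rather than from repeated consumer panels.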
A deep learning-based multi-model ensemble method for cancer prediction.
Xiao, Yawen; Wu, Jun; Lin, Zongli; Zhao, Xiaodong
2018-01-01
Cancer is a complex worldwide health problem associated with high mortality. With the rapid development of the high-throughput sequencing technology and the application of various machine learning methods that have emerged in recent years, progress in cancer prediction has been increasingly made based on gene expression, providing insight into effective and accurate treatment decision making. Thus, developing machine learning methods, which can successfully distinguish cancer patients from healthy persons, is of great current interest. However, among the classification methods applied to cancer prediction so far, no one method outperforms all the others. In this paper, we demonstrate a new strategy, which applies deep learning to an ensemble approach that incorporates multiple different machine learning models. We supply informative gene data selected by differential gene expression analysis to five different classification models. Then, a deep learning method is employed to ensemble the outputs of the five classifiers. The proposed deep learning-based multi-model ensemble method was tested on three public RNA-seq data sets of three kinds of cancers, Lung Adenocarcinoma, Stomach Adenocarcinoma and Breast Invasive Carcinoma. The test results indicate that it increases the prediction accuracy of cancer for all the tested RNA-seq data sets as compared to using a single classifier or the majority voting algorithm. By taking full advantage of different classifiers, the proposed deep learning-based multi-model ensemble method is shown to be accurate and effective for cancer prediction. Copyright © 2017 Elsevier B.V. All rights reserved.
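A minimal sketch of this stacking idea, with five scikit-learn base classifiers and a small neural-network meta-learner standing in for the paper's deep learning ensemble stage; the data are synthetic stand-ins for gene expression profiles.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for a (samples x selected genes) expression matrix.
X, y = make_classification(n_samples=400, n_features=50, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Five heterogeneous base classifiers, fit on the training fold.
base = [LogisticRegression(max_iter=1000),
        DecisionTreeClassifier(random_state=0),
        RandomForestClassifier(random_state=0),
        GradientBoostingClassifier(random_state=0),
        SVC(probability=True, random_state=0)]
for clf in base:
    clf.fit(X_tr, y_tr)

def stack(X_):
    # Each sample is re-represented by the five base-model probabilities.
    return np.column_stack([clf.predict_proba(X_)[:, 1] for clf in base])

# Neural-network meta-learner over the base classifiers' outputs.
meta = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
meta.fit(stack(X_tr), y_tr)
ensemble_acc = meta.score(stack(X_te), y_te)
```

Unlike majority voting, the meta-learner can learn that some base models are more trustworthy than others in different regions of the probability space.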
De Buck, Stefan S; Sinha, Vikash K; Fenu, Luca A; Nijsen, Marjoleen J; Mackie, Claire E; Gilissen, Ron A H J
2007-10-01
The aim of this study was to evaluate different physiologically based modeling strategies for the prediction of human pharmacokinetics. Plasma profiles after intravenous and oral dosing were simulated for 26 clinically tested drugs. Two mechanism-based predictions of human tissue-to-plasma partitioning (P(tp)) from physicochemical input (method Vd1) were evaluated for their ability to describe human volume of distribution at steady state (V(ss)). This method was compared with a strategy that combined predicted and experimentally determined in vivo rat P(tp) data (method Vd2). The best V(ss) predictions were obtained using method Vd2, provided that rat P(tp) input was corrected for interspecies differences in plasma protein binding (84% within 2-fold). V(ss) predictions from physicochemical input alone were poor (32% within 2-fold). Total body clearance (CL) was predicted as the sum of scaled rat renal clearance and hepatic clearance projected from in vitro metabolism data. The best CL predictions were obtained by disregarding both blood and microsomal or hepatocyte binding (method CL2, 74% within 2-fold), whereas strong bias was seen when using both blood and microsomal or hepatocyte binding (method CL1, 53% within 2-fold). The physiologically based pharmacokinetics (PBPK) model that combined methods Vd2 and CL2 yielded the most accurate predictions of in vivo terminal half-life (69% within 2-fold). The GastroPlus advanced compartmental absorption and transit model was used to construct an absorption-disposition model and provided accurate predictions of the area under the plasma concentration-time profile, oral apparent volume of distribution, and maximum plasma concentration after oral dosing, with 74%, 70%, and 65% within 2-fold, respectively. This evaluation demonstrates that PBPK models can lead to reasonable predictions of human pharmacokinetics.
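The tissue-composition route to V(ss) can be sketched directly: V(ss) is plasma volume plus the sum of tissue volumes weighted by their tissue-to-plasma partition coefficients. The volumes and P(tp) values below are hypothetical round numbers, not the study's data.

```python
# Hypothetical tissue volumes (L) and tissue-to-plasma partition coefficients
# for a 70 kg adult; a small subset of tissues for illustration only.
tissue_volumes_l = {"muscle": 29.0, "adipose": 18.0, "liver": 1.8, "kidney": 0.3}
ptp = {"muscle": 1.2, "adipose": 0.4, "liver": 3.0, "kidney": 2.5}
plasma_volume_l = 3.0

# Vss = Vplasma + sum over tissues of (Vtissue * Ptp).
vss_l = plasma_volume_l + sum(tissue_volumes_l[t] * ptp[t]
                              for t in tissue_volumes_l)
vss_l_per_kg = vss_l / 70.0  # normalised to body weight
```

The two strategies compared in the abstract differ only in where the P(tp) values come from: predicted from physicochemical properties (Vd1) versus corrected in vivo rat measurements (Vd2); the summation itself is the same.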
Electrical test prediction using hybrid metrology and machine learning
NASA Astrophysics Data System (ADS)
Breton, Mary; Chao, Robin; Muthinti, Gangadhara Raja; de la Peña, Abraham A.; Simon, Jacques; Cepler, Aron J.; Sendelbach, Matthew; Gaudiello, John; Emans, Susan; Shifrin, Michael; Etzioni, Yoav; Urenski, Ronen; Lee, Wei Ti
2017-03-01
Electrical test measurement in the back-end of line (BEOL) is crucial for wafer and die sorting as well as comparing intended process splits. Any in-line, nondestructive technique in the process flow to accurately predict these measurements can significantly improve mean-time-to-detect (MTTD) of defects and improve cycle times for yield and process learning. Measuring after BEOL metallization is commonly done for process control and learning, particularly with scatterometry (also called OCD (Optical Critical Dimension)), which can solve for multiple profile parameters such as metal line height or sidewall angle and does so within patterned regions. This gives scatterometry an advantage over inline microscopy-based techniques, which provide top-down information, since such techniques can be insensitive to sidewall variations hidden under the metal fill of the trench. But when faced with correlation to electrical test measurements that are specific to the BEOL processing, both techniques face the additional challenge of sampling. Microscopy-based techniques are sampling-limited by their small probe size, while scatterometry is traditionally limited (for microprocessors) to scribe targets that mimic device ground rules but are not necessarily designed to be electrically testable. A solution to this sampling challenge lies in a fast reference-based machine learning capability that allows for OCD measurement directly of the electrically-testable structures, even when they are not OCD-compatible. By incorporating such direct OCD measurements, correlation to, and therefore prediction of, resistance of BEOL electrical test structures is significantly improved. Improvements in prediction capability for multiple types of in-die electrically-testable device structures are demonstrated. To further improve the quality of the prediction of the electrical resistance measurements, hybrid metrology using the OCD measurements as well as X-ray metrology (XRF) is used.
Hybrid metrology is the practice of combining information from multiple sources in order to enable or improve the measurement of one or more critical parameters. Here, the XRF measurements are used to detect subtle changes in barrier layer composition and thickness that can have second-order effects on the electrical resistance of the test structures. By accounting for such effects with the aid of the X-ray-based measurements, further improvement in the OCD correlation to electrical test measurements is achieved. By using both types of solution, namely the incorporation of fast reference-based machine learning on non-OCD-compatible test structures and hybrid metrology combining OCD with XRF technology, improvement in BEOL cycle time learning could be accomplished through improved prediction capability.
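A toy regression illustrates why adding an XRF-measured barrier parameter to OCD geometry can improve resistance prediction; the synthetic data encode a second-order barrier-thickness effect, and all dimensions and coefficients are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n = 200
# OCD-style geometry measurements (nm) and an XRF-style barrier thickness (nm).
line_height = rng.normal(100.0, 5.0, n)
line_width = rng.normal(40.0, 2.0, n)
barrier_thk = rng.normal(3.0, 0.3, n)

# Synthetic resistance: dominated by conductor cross-section (height * width),
# with a smaller second-order barrier-thickness term, plus measurement noise.
resistance = 1e4 / (line_height * line_width) + 0.5 * barrier_thk \
    + rng.normal(0, 0.05, n)

ocd_only = np.column_stack([line_height, line_width])
hybrid = np.column_stack([line_height, line_width, barrier_thk])

r2_ocd = LinearRegression().fit(ocd_only, resistance).score(ocd_only, resistance)
r2_hybrid = LinearRegression().fit(hybrid, resistance).score(hybrid, resistance)
```

The hybrid model recovers the variance that the barrier term hides from a geometry-only predictor, which is the qualitative effect the abstract reports for OCD plus XRF.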
NEL, ANDRE; XIA, TIAN; MENG, HUAN; WANG, XIANG; LIN, SIJIE; JI, ZHAOXIA; ZHANG, HAIYUAN
2014-01-01
Conspectus: The production of engineered nanomaterials (ENMs) is a scientific breakthrough in material design and the development of new consumer products. While the successful implementation of nanotechnology is important for the growth of the global economy, we also need to consider the possible environmental health and safety (EHS) impact as a result of the novel physicochemical properties that could generate hazardous biological outcomes. In order to assess ENM hazard, reliable and reproducible screening approaches are needed to test the basic materials as well as nano-enabled products. A platform is required to investigate the potentially endless number of bio-physicochemical interactions at the nano/bio interface, in response to which we have developed a predictive toxicological approach. We define a predictive toxicological approach as the use of mechanism-based high throughput screening in vitro to make predictions about the physicochemical properties of ENMs that may lead to the generation of pathology or disease outcomes in vivo. The in vivo results are used to validate and improve the in vitro high throughput screening (HTS) and to establish structure-activity relationships (SARs) that allow hazard ranking and modeling by an appropriate combination of in vitro and in vivo testing. This notion is in agreement with the landmark 2007 report from the US National Academy of Sciences, “Toxicity Testing in the 21st Century: A Vision and a Strategy” (http://www.nap.edu/catalog.php?record_id=11970), which advocates increased efficiency of toxicity testing by transitioning from qualitative, descriptive animal testing to quantitative, mechanistic and pathway-based toxicity testing in human cells or cell lines using high throughput approaches. Accordingly, we have implemented HTS approaches to screen compositional and combinatorial ENM libraries to develop hazard ranking and structure-activity relationships that can be used for predicting in vivo injury outcomes. 
This predictive approach allows the bulk of the screening analysis and high volume data generation to be carried out in vitro, following which limited, but critical, validation studies are carried out in animals or whole organisms. Risk reduction in the exposed human or environmental populations can then focus on limiting or avoiding exposures that trigger these toxicological responses as well as implementing safer design of potentially hazardous ENMs. In this communication, we review the tools required for establishing predictive toxicology paradigms to assess inhalation and environmental toxicological scenarios through the use of compositional and combinatorial ENM libraries, mechanism-based HTS assays, hazard ranking and development of nano-SARs. We discuss the major injury paradigms that have emerged based on specific ENM properties, and describe the safer design of ZnO nanoparticles based on characterization of dissolution chemistry as a major predictor of toxicity. PMID:22676423
Low, Yee Syuen; Blöcker, Christopher; McPherson, John R; Tang, See Aik; Cheng, Ying Ying; Wong, Joyner Y S; Chua, Clarinda; Lim, Tony K H; Tang, Choong Leong; Chew, Min Hoe; Tan, Patrick; Tan, Iain B; Rozen, Steven G; Cheah, Peh Yean
2017-09-10
Approximately 20% of early-stage (I/II) colorectal cancer (CRC) patients develop metastases despite curative surgery. We aim to develop a formalin-fixed and paraffin-embedded (FFPE)-based predictor of metastases in early-stage, clinically defined low-risk, microsatellite-stable (MSS) CRC patients. We considered genome-wide mRNA and miRNA expression and the mutation status of 20 genes assayed in 150 fresh-frozen tumours with known metastasis status. We selected 193 genes for further analysis using NanoString nCounter arrays on corresponding FFPE tumours. Neither mutation status nor miRNA expression improved the estimated prediction. The final predictor, ColoMet19, based on the top 19 genes' mRNA levels and trained by a Random Forest machine-learning strategy, had an estimated positive predictive value (PPV) of 0.66. We tested ColoMet19 on an independent test set of 131 tumours and obtained a population-adjusted PPV of 0.67, indicating that early-stage CRC patients who test positive have a 67% risk of developing metastases, substantially higher than the metastasis risk of 40% for node-positive (Stage III) patients, who are generally treated with chemotherapy. Predicted-positive patients also had poorer metastasis-free survival (hazard ratios [HR] = 1.92, design set; HR = 2.05, test set). Thus, early-stage CRC patients who test positive may be considered for adjuvant therapy after surgery. Copyright © 2017 Elsevier B.V. All rights reserved.
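A sketch of the general recipe (a Random Forest on a 19-feature expression matrix, scored by positive predictive value); the data are synthetic and the features are not the ColoMet19 genes.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Synthetic stand-in for a 19-gene expression matrix across 300 tumours.
n, p = 300, 19
X = rng.standard_normal((n, p))
# A metastasis label loosely driven by the first three "genes" plus noise.
y = (X[:, :3].sum(axis=1) + 0.5 * rng.standard_normal(n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=1)
clf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_tr, y_tr)
pred = clf.predict(X_te)

# Positive predictive value: of those predicted to metastasise, how many did.
tp = int(np.sum((pred == 1) & (y_te == 1)))
fp = int(np.sum((pred == 1) & (y_te == 0)))
ppv = tp / (tp + fp)
```

Note the abstract's headline 0.67 is a population-adjusted PPV, which additionally reweights these counts by the real-world prevalence of metastasis; the raw test-set PPV above omits that step.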
TH-A-9A-01: Active Optical Flow Model: Predicting Voxel-Level Dose Prediction in Spine SBRT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, J; Wu, Q.J.; Yin, F
2014-06-15
Purpose: To predict voxel-level dose distribution and enable effective evaluation of cord dose sparing in spine SBRT. Methods: We present an active optical flow model (AOFM) to statistically describe cord dose variations and train a predictive model to represent correlations between AOFM and PTV contours. Thirty clinically accepted spine SBRT plans are evenly divided into training and testing datasets. The development of the predictive model consists of 1) collecting a sequence of dose maps including PTV and OAR (spinal cord) as well as a set of associated PTV contours adjacent to OAR from the training dataset, 2) classifying data into five groups based on the PTV's location relative to OAR: two "Top"s, "Left", "Right", and "Bottom", 3) randomly selecting a dose map as the reference in each group and applying rigid registration and optical flow deformation to match all other maps to the reference, 4) building the AOFM by importing optical flow vectors and dose values into principal component analysis (PCA), 5) applying another PCA to features of PTV and OAR contours to generate an active shape model (ASM), and 6) computing a linear regression model of correlations between AOFM and ASM. When predicting the dose distribution of a new case in the testing dataset, the PTV is first assigned to a group based on its contour characteristics. Contour features are then transformed into the ASM's principal coordinates of the selected group. Finally, voxel-level dose distribution is determined by mapping from the ASM space to the AOFM space using the predictive model. Results: The DVHs predicted by the AOFM-based model and those in clinical plans are comparable in training and testing datasets. At 2% volume the dose difference between predicted and clinical plans is 4.2±4.4% and 3.3±3.5% in the training and testing datasets, respectively. Conclusion: The AOFM is effective in predicting voxel-level dose distribution for spine SBRT. 
Partially supported by NIH/NCI under grant #R21CA161389 and a master research grant by Varian Medical Systems.
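The ASM-to-AOFM mapping described in the Methods (PCA on shape features, PCA on dose maps, and a linear regression between the two score spaces) can be sketched on toy data; all dimensions are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
# Toy stand-in: 15 training "plans", each with a shape-feature vector (ASM
# input) and a flattened dose-map vector (AOFM input).
n_train, shape_dim, dose_dim = 15, 20, 400
shape = rng.standard_normal((n_train, shape_dim))
# Dose maps generated as a linear function of shape plus noise.
W = rng.standard_normal((shape_dim, dose_dim))
dose = shape @ W + 0.1 * rng.standard_normal((n_train, dose_dim))

# Two PCA models: one over contour/shape features, one over dose maps.
asm = PCA(n_components=5).fit(shape)
aofm = PCA(n_components=5).fit(dose)

# Linear regression from shape-space scores to dose-space scores.
reg = LinearRegression().fit(asm.transform(shape), aofm.transform(dose))

# Predict a new plan's dose map: shape scores -> dose scores -> dose map.
new_shape = rng.standard_normal((1, shape_dim))
pred_dose = aofm.inverse_transform(reg.predict(asm.transform(new_shape)))
```

The real method additionally aligns the dose maps with rigid registration and optical flow before the PCA step, so that corresponding voxels line up across plans; that alignment is omitted here.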
Chuke, Stella O.; Yen, Nguyen Thi Ngoc; Laserson, Kayla F.; Phuoc, Nguyen Huu; Trinh, Nguyen An; Nhung, Duong Thi Cam; Mai, Vo Thi Chi; Qui, An Dang; Hai, Hoang Hoa; Loan, Le Thien Huong; Jones, Warren G.; Whitworth, William C.; Shah, J. Jina; Painter, John A.; Mazurek, Gerald H.; Maloney, Susan A.
2014-01-01
Objective. Use of tuberculin skin tests (TSTs) and interferon gamma release assays (IGRAs) as part of tuberculosis (TB) screening among immigrants from high TB-burden countries has not been fully evaluated. Methods. Prevalence of Mycobacterium tuberculosis infection (MTBI) based on TST, or the QuantiFERON-TB Gold test (QFT-G), was determined among immigrant applicants in Vietnam bound for the United States (US); factors associated with test results and discordance were assessed; predictive values of TST and QFT-G for identifying chest radiographs (CXRs) consistent with TB were calculated. Results. Of 1,246 immigrant visa applicants studied, 57.9% were TST positive, 28.3% were QFT-G positive, and test agreement was 59.4%. Increasing age was associated with positive TST results, positive QFT-G results, TST-positive but QFT-G-negative discordance, and abnormal CXRs consistent with TB. Positive predictive values of TST and QFT-G for an abnormal CXR were 25.9% and 25.6%, respectively. Conclusion. The estimated prevalence of MTBI among US-bound visa applicants in Vietnam based on TST was twice that based on QFT-G, and 14 times higher than a TST-based estimate of MTBI prevalence reported for the general US population in 2000. QFT-G was not better than TST at predicting abnormal CXRs consistent with TB. PMID:24738031
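The agreement and PPV arithmetic used in such comparisons is simple to reproduce; the 2x2 counts below are a hypothetical split of the 1,246 applicants, not the study's actual cross-tabulation.

```python
def percent_agreement(both_pos, a_pos_b_neg, a_neg_b_pos, both_neg):
    """Percent of subjects on whom two binary tests agree."""
    total = both_pos + a_pos_b_neg + a_neg_b_pos + both_neg
    return 100.0 * (both_pos + both_neg) / total

def ppv(true_pos, false_pos):
    """Positive predictive value against a reference outcome, as a percent."""
    return 100.0 * true_pos / (true_pos + false_pos)

# Hypothetical TST-vs-QFT-G cross-tabulation summing to the 1,246 applicants.
agree = percent_agreement(300, 420, 50, 476)

# Hypothetical counts of test-positive applicants with/without abnormal CXR.
ppv_example = ppv(52, 149)
```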
[Predictive model based multimetric index of macroinvertebrates for river health assessment].
Chen, Kai; Yu, Hai Yan; Zhang, Ji Wei; Wang, Bei Xin; Chen, Qiu Wen
2017-06-18
Improving the stability of the index of biotic integrity (IBI, i.e., multi-metric index, MMI) across temporal and spatial scales is one of the most important issues in water ecosystem integrity bioassessment and water environment management. Using datasets of field-based macroinvertebrate and physicochemical variables and GIS-based natural predictors (e.g., geomorphology and climate) and land use variables collected at 227 river sites from 2004 to 2011 across Zhejiang Province, China, we used random forests (RF) to adjust for the effects of natural variation at temporal and spatial scales on macroinvertebrate metrics. We then developed natural-variation-adjusted (predictive) and unadjusted (null) MMIs and compared performance between them. The core metrics selected for the predictive and null MMIs differed from each other, and the natural variation within core metrics of the predictive MMI explained by the RF models ranged between 11.4% and 61.2%. The predictive MMI was more precise and accurate, but less responsive and sensitive, than the null MMI. The multivariate nearest-neighbor test determined that 9 test sites and 1 most-degraded site fell outside the environmental space of the reference site network. We found that the combination of the predictive MMI and the nearest-neighbor test performed best and decreased the risks of type I errors (designating a water body as being in poor biological condition when it was actually in good condition) and type II errors (designating a water body as being in good biological condition when it was actually in poor condition). Our results provide an effective method to improve the stability and performance of the index of biotic integrity.
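The residual-adjustment idea (use a random forest to predict each metric from natural predictors, then work with observed-minus-predicted values) can be sketched as follows; the site data and gradients are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
# Synthetic sites: a biological metric driven partly by natural gradients
# (elevation, temperature) and partly by human land use (the signal of interest).
n = 227
elevation = rng.uniform(0, 2000, n)
temperature = 25.0 - 0.005 * elevation + rng.normal(0, 1, n)
land_use = rng.uniform(0, 1, n)
metric = (50.0 - 0.01 * elevation + 2.0 * temperature
          - 20.0 * land_use + rng.normal(0, 2, n))

# RF models the metric from natural predictors only (no land use).
natural = np.column_stack([elevation, temperature])
rf = RandomForestRegressor(n_estimators=200, random_state=4).fit(natural, metric)

# Adjusted metric = observed minus the natural-background expectation; it
# should track land use rather than geography and climate.
adjusted = metric - rf.predict(natural)
corr_raw = float(np.corrcoef(metric, land_use)[0, 1])
corr_adj = float(np.corrcoef(adjusted, land_use)[0, 1])
```

In practice the RF would be fitted on reference sites and applied to test sites; fitting and predicting on the same sites, as here, overstates the adjustment but shows the mechanics.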
Clothier, Richard; Starzec, Gemma; Pradel, Lionel; Baxter, Victoria; Jones, Melanie; Cox, Helen; Noble, Linda
2002-01-01
A range of cosmetics formulations with human patch-test data were supplied in coded form, for the examination of the use of a combined in vitro permeability barrier assay and cell viability assay to generate, and then test, a prediction model for assessing potential human skin patch-test results. The target cells employed were of the Madin Darby canine kidney cell line, which establishes tight junctions and adherens junctions able to restrict the permeability of sodium fluorescein across the barrier of the confluent cell layer. The prediction model for interpretation of the in vitro assay results included initial effects and the recovery profile over 72 hours. A set of the hand-wash, surfactant-based formulations was tested to generate the prediction model, and then six others were evaluated. The model system was then also evaluated with powder laundry detergents and hand moisturisers: their effects were predicted by the in vitro test system. The model was under-predictive for two of the ten hand-wash products. It was over-predictive for the moisturisers (two out of six) and eight out of ten laundry powders. However, the in vivo human patch test data were variable, and 19 of the 26 predictions were correct or within 0.5 on the 0-4.0 scale used for the in vivo scores, i.e. within the same variable range reported for the repeat-test hand-wash in vivo data.
NASA Astrophysics Data System (ADS)
Rooper, Christopher N.; Zimmermann, Mark; Prescott, Megan M.
2017-08-01
Deep-sea coral and sponge ecosystems are widespread throughout most of Alaska's marine waters and are associated with many different species of fishes and invertebrates. These ecosystems are vulnerable to the effects of commercial fishing activities and climate change. We compared four commonly used species distribution models (general linear models, generalized additive models, boosted regression trees and random forest models) and an ensemble model to predict the presence or absence and abundance of six groups of benthic invertebrate taxa in the Gulf of Alaska. All four model types performed adequately on training data for predicting presence and absence, with random forest models having the best overall performance measured by the area under the receiver operating characteristic curve (AUC). The models also performed well on the test data for presence and absence, with average AUCs ranging from 0.66 to 0.82. For the test data, ensemble models performed the best. For abundance data, there was an obvious demarcation in performance between the two regression-based methods (general linear models and generalized additive models) and the tree-based models. The boosted regression tree and random forest models out-performed the other models by a wide margin on both the training and testing data. However, there was a significant drop-off in performance for all models of invertebrate abundance (∼50%) when moving from the training data to the testing data. Ensemble model performance was between the tree-based and regression-based methods. The maps of predictions from the models for both presence and abundance agreed very well across model types, with an increase in variability in predictions for the abundance data. 
We conclude that where the data conform well to the modeled distribution (such as the presence-absence data and binomial distribution in this study), the four types of models will provide similar results, although the regression-type models may be more consistent with biological theory. For data with highly zero-inflated and non-normal distributions, such as the abundance data from this study, the tree-based methods performed better. Ensemble models that averaged predictions across the four model types performed better than the GLM or GAM models but slightly poorer than the tree-based methods, suggesting ensemble models might be more robust to overfitting than tree methods, while mitigating some of the disadvantages in predictive performance of regression methods.
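An unweighted probability-averaging ensemble of the kind compared above can be sketched with scikit-learn stand-ins for the GLM and tree-based models; the presence/absence data are synthetic.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

# Synthetic presence/absence data standing in for coral/sponge observations.
X, y = make_classification(n_samples=500, n_features=8, n_informative=4,
                           random_state=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=2)

models = [LogisticRegression(max_iter=1000),          # GLM analogue
          RandomForestClassifier(random_state=2),     # random forest
          GradientBoostingClassifier(random_state=2)]  # boosted trees
probs = []
for m in models:
    m.fit(X_tr, y_tr)
    probs.append(m.predict_proba(X_te)[:, 1])

# Unweighted ensemble: average the predicted presence probabilities.
ensemble_prob = np.mean(probs, axis=0)
aucs = [roc_auc_score(y_te, p) for p in probs]
ensemble_auc = roc_auc_score(y_te, ensemble_prob)
```

Averaging probabilities rather than hard presence/absence calls preserves each model's uncertainty, which is part of why ensembles tend to be more robust to any single model's overfitting.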
Li, Zhigang; Liu, Weiguo; Zhang, Jinhuan; Hu, Jingwen
2015-09-01
Skull fracture is one of the most common pediatric traumas. However, injury assessment tools for predicting pediatric skull fracture risk are not well established, mainly due to the lack of cadaver tests. Weber conducted 50 pediatric cadaver drop tests for forensic research on child abuse in the mid-1980s (Experimental studies of skull fractures in infants, Z Rechtsmed. 92: 87-94, 1984; Biomechanical fragility of the infant skull, Z Rechtsmed. 94: 93-101, 1985). To our knowledge, these studies contained the largest sample size among pediatric cadaver tests in the literature. However, the lack of injury measurements limited their direct application in investigating pediatric skull fracture risks. In this study, the 50 pediatric cadaver tests from Weber's studies were reconstructed using a parametric pediatric head finite element (FE) model that was morphed into subjects with the ages, head sizes/shapes, and skull thickness values reported in the tests. Skull fracture risk curves for infants from 0 to 9 months old were developed from the model-predicted head injury measures through logistic regression analysis. It was found that the model-predicted stress responses in the skull (maximal von Mises stress, maximal shear stress, and maximal first principal stress) were better predictors of pediatric skull fracture than global kinematic-based injury measures (peak head acceleration and head injury criterion (HIC)). This study demonstrated the feasibility of using age- and size/shape-appropriate head FE models to predict pediatric head injuries. Such models can account for the morphological variations among subjects, which cannot be considered by a single FE human model.
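The final step (logistic regression from a model-predicted stress measure to fracture probability) can be sketched as follows; the stress values and outcomes are simulated, not Weber's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
# Hypothetical model-predicted peak skull stresses (MPa) for 50 reconstructed
# cases, with fracture more likely at higher stress; illustrative only.
stress = rng.uniform(5.0, 60.0, 50)
p_true = 1.0 / (1.0 + np.exp(-(stress - 30.0) / 5.0))
fracture = (rng.uniform(size=50) < p_true).astype(int)

# Fit the risk curve: fracture probability as a function of predicted stress.
risk_model = LogisticRegression().fit(stress.reshape(-1, 1), fracture)
risk_at_45_mpa = float(risk_model.predict_proba([[45.0]])[0, 1])
risk_at_10_mpa = float(risk_model.predict_proba([[10.0]])[0, 1])
```

The study's comparison of candidate predictors amounts to fitting one such curve per injury measure (von Mises stress, shear stress, HIC, ...) and asking which separates fracture from non-fracture cases best.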
NASA Astrophysics Data System (ADS)
Qiu, Yuchen; Wang, Yunzhi; Yan, Shiju; Tan, Maxine; Cheng, Samuel; Liu, Hong; Zheng, Bin
2016-03-01
In order to establish a new personalized breast cancer screening paradigm, it is critically important to accurately predict the short-term risk of a woman having image-detectable cancer after a negative mammographic screening. In this study, we developed and tested a novel short-term risk assessment model based on a deep learning method. During the experiment, a set of 270 "prior" negative screening cases was assembled. In the next sequential ("current") screening mammography, 135 cases were positive and 135 cases remained negative. These cases were randomly divided into a training set with 200 cases and a testing set with 70 cases. A deep learning based computer-aided diagnosis (CAD) scheme was then developed for the risk assessment, which consists of two modules: an adaptive feature identification module and a risk prediction module. The adaptive feature identification module is composed of three pairs of convolution-max-pooling layers, which contain 20, 10, and 5 feature maps, respectively. The risk prediction module is implemented by a multilayer perceptron (MLP) classifier, which produces a risk score to predict the likelihood of the woman developing short-term mammography-detectable cancer. The results show that the new CAD-based risk model yielded a positive predictive value of 69.2% and a negative predictive value of 74.2%, with a total prediction accuracy of 71.4%. This study demonstrated that applying this new deep learning technology may have significant potential for developing a new short-term risk prediction scheme with improved performance in detecting early abnormal symptoms from negative mammograms.
NASA Technical Reports Server (NTRS)
Sebok, Angelia; Wickens, Christopher; Sargent, Robert
2015-01-01
One human factors challenge is predicting operator performance in novel situations. Approaches such as drawing on relevant previous experience, and developing computational models to predict operator performance in complex situations, offer potential methods to address this challenge. A few concerns with modeling operator performance are that models need to be realistic, and that they need to be tested empirically and validated. In addition, many existing human performance modeling tools are complex and require that an analyst gain significant experience before developing models for meaningful data collection. This paper describes an effort to address these challenges by developing an easy-to-use model-based tool, using models developed from a review of the existing human performance literature and targeted experimental studies, and performing an empirical validation of key model predictions.
Predicting falls in older adults using the four square step test.
Cleary, Kimberly; Skornyakov, Elena
2017-10-01
The Four Square Step Test (FSST) is a performance-based balance tool involving stepping over four single-point canes placed on the floor in a cross configuration. The purpose of this study was to evaluate properties of the FSST in older adults who lived independently. Forty-five community dwelling older adults provided fall history and completed the FSST, Berg Balance Scale (BBS), Timed Up and Go (TUG), and Tinetti in random order. Future falls were recorded for 12 months following testing. The FSST accurately distinguished between non-fallers and multiple fallers, and the 15-second threshold score accurately distinguished multiple fallers from non-multiple fallers based on fall history. The FSST predicted future falls, and performance on the FSST was significantly correlated with performance on the BBS, TUG, and Tinetti. However, the test is not appropriate for older adults who use walkers. Overall, the FSST is a valid yet underutilized measure of balance performance and fall prediction tool that physical therapists should consider using in ambulatory community dwelling older adults.
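Applying the 15-second FSST threshold as a screening rule, and scoring it against fall history, reduces to a small confusion-matrix calculation; the times and labels below are made up for illustration.

```python
# Hypothetical FSST completion times (seconds) and multiple-faller history
# for eight participants; illustrative data, not the study's measurements.
fsst_seconds = [9.2, 16.5, 12.1, 18.0, 14.9, 22.3, 10.4, 15.2]
multiple_faller = [0, 1, 0, 1, 0, 1, 0, 0]

# Screening rule: times above 15 seconds flag the participant as at risk.
predicted = [1 if t > 15.0 else 0 for t in fsst_seconds]

tp = sum(p and a for p, a in zip(predicted, multiple_faller))
fn = sum((not p) and a for p, a in zip(predicted, multiple_faller))
fp = sum(p and (not a) for p, a in zip(predicted, multiple_faller))
tn = sum((not p) and (not a) for p, a in zip(predicted, multiple_faller))

sensitivity = tp / (tp + fn)  # flagged fraction of true multiple fallers
specificity = tn / (tn + fp)  # cleared fraction of non-multiple fallers
```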
Smith, Rebecca L.; Schukken, Ynte H.; Lu, Zhao; Mitchell, Rebecca M.; Grohn, Yrjo T.
2013-01-01
Objective To develop a mathematical model to simulate infection dynamics of Mycobacterium bovis in cattle herds in the United States and predict efficacy of the current national control strategy for tuberculosis in cattle. Design Stochastic simulation model. Sample Theoretical cattle herds in the United States. Procedures A model of within-herd M bovis transmission dynamics following introduction of 1 latently infected cow was developed. Frequency- and density-dependent transmission modes and 3 tuberculin-test based culling strategies (no test-based culling, constant (annual) testing with test-based culling, and the current strategy of slaughterhouse detection-based testing and culling) were investigated. Results were evaluated for 3 herd sizes over a 10-year period and validated via simulation of known outbreaks of M bovis infection. Results On the basis of 1,000 simulations (1000 herds each) at replacement rates typical for dairy cattle (0.33/y), median time to detection of M bovis infection in medium-sized herds (276 adult cattle) via slaughterhouse surveillance was 27 months after introduction, and 58% of these herds would spontaneously clear the infection prior to that time. Sixty-two percent of medium-sized herds without intervention and 99% of those managed with constant test-based culling were predicted to clear infection < 10 years after introduction. The model predicted observed outbreaks best for frequency-dependent transmission, and probability of clearance was most sensitive to replacement rate. Conclusions and Clinical Relevance Although modeling indicated the current national control strategy was sufficient for elimination of M bovis infection from dairy herds after detection, slaughterhouse surveillance was not sufficient to detect M bovis infection in all herds and resulted in subjectively delayed detection, compared with the constant testing method. Further research is required to economically optimize this strategy. PMID:23865885
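A minimal discrete-time sketch of the within-herd model (one latent introduction, per-head transmission risk from infectious animals, progression, and removal through replacement) is below; all rates are illustrative, not the study's estimates.

```python
import random

random.seed(42)

def simulate_clearance(months=120, herd=276, beta=0.004,
                       progression=0.05, removal=0.0275):
    """Return True if infection dies out within the horizon.

    Monthly steps: each susceptible is infected with probability
    1 - (1 - beta)^infectious; each latent progresses with probability
    `progression`; infected animals leave via replacement with
    probability `removal`. All rates are hypothetical.
    """
    latent, infectious = 1, 0
    for _ in range(months):
        susceptible = max(herd - latent - infectious, 0)
        p_inf = 1.0 - (1.0 - beta) ** infectious
        new_latent = sum(random.random() < p_inf for _ in range(susceptible))
        new_inf = sum(random.random() < progression for _ in range(latent))
        latent += new_latent - new_inf
        infectious += new_inf
        # Random removal (replacement/culling) of infected animals.
        latent -= sum(random.random() < removal for _ in range(latent))
        infectious -= sum(random.random() < removal for _ in range(infectious))
        if latent + infectious == 0:
            return True
    return False

# Fraction of simulated herds that spontaneously clear the infection.
cleared_fraction = sum(simulate_clearance() for _ in range(200)) / 200
```

The study's model additionally distinguishes frequency- from density-dependent transmission and layers on the testing-and-culling strategies; this sketch only reproduces the spontaneous-clearance mechanic that drives the reported percentages.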
Lightweight ZERODUR: Validation of Mirror Performance and Mirror Modeling Predictions
NASA Technical Reports Server (NTRS)
Hull, Tony; Stahl, H. Philip; Westerhoff, Thomas; Valente, Martin; Brooks, Thomas; Eng, Ron
2017-01-01
Upcoming spaceborne missions, both moderate and large in scale, require extreme dimensional stability while relying both upon established lightweight mirror materials and upon accurate modeling methods to predict performance under varying boundary conditions. We describe tests, recently performed at NASA's XRCF chambers and laboratories in Huntsville, Alabama, during which a 1.2 m diameter, f/1.29, 88% lightweighted SCHOTT ZERODUR(TradeMark) mirror was tested for thermal stability under static loads in steps down to 230 K. Test results are compared to model predictions based upon recently published data on ZERODUR(TradeMark). In addition to monitoring the mirror surface for thermal perturbations in XRCF thermal vacuum tests, static load gravity deformations have been measured and compared to model predictions, and the modal response (dynamic disturbance) was measured and compared to the model. We discuss the fabrication approach and optomechanical design of the ZERODUR(TradeMark) mirror substrate by SCHOTT, its optical preparation for test by Arizona Optical Systems (AOS), and summarize the outcome of NASA's XRCF tests and model validations.
Lightweight ZERODUR®: Validation of mirror performance and mirror modeling predictions
NASA Astrophysics Data System (ADS)
Hull, Anthony B.; Stahl, H. Philip; Westerhoff, Thomas; Valente, Martin; Brooks, Thomas; Eng, Ron
2017-01-01
Upcoming spaceborne missions, both moderate and large in scale, require extreme dimensional stability while relying both upon established lightweight mirror materials and upon accurate modeling methods to predict performance under varying boundary conditions. We describe tests, recently performed at NASA’s XRCF chambers and laboratories in Huntsville, Alabama, during which a 1.2 m diameter, f/1.29, 88% lightweighted SCHOTT ZERODUR® mirror was tested for thermal stability under static loads in steps down to 230 K. Test results are compared to model predictions based upon recently published data on ZERODUR®. In addition to monitoring the mirror surface for thermal perturbations in XRCF thermal vacuum tests, static load gravity deformations have been measured and compared to model predictions, and the modal response (dynamic disturbance) was measured and compared to the model. We discuss the fabrication approach and optomechanical design of the ZERODUR® mirror substrate by SCHOTT, its optical preparation for test by Arizona Optical Systems (AOS), and summarize the outcome of NASA’s XRCF tests and model validations.
Kondo, M; Nagao, Y; Mahbub, M H; Tanabe, T; Tanizawa, Y
2018-04-29
To identify factors predicting early postpartum glucose intolerance in Japanese women with gestational diabetes mellitus, using decision-curve analysis. A retrospective cohort study was performed. The participants were 123 Japanese women with gestational diabetes who underwent 75-g oral glucose tolerance tests at 8-12 weeks after delivery. They were divided into a glucose intolerance and a normal glucose tolerance group based on postpartum oral glucose tolerance test results. Analysis of the pregnancy oral glucose tolerance test results identified predictive factors for postpartum glucose intolerance. We also evaluated the clinical usefulness of the prediction model based on decision-curve analysis. Of 123 women, 78 (63.4%) had normoglycaemia and 45 (36.6%) had glucose intolerance. Multivariable logistic regression analysis showed insulinogenic index/fasting immunoreactive insulin and summation of glucose levels assessed during pregnancy oral glucose tolerance tests (total glucose) to be independent risk factors for postpartum glucose intolerance. Evaluating the regression models, the best discrimination (area under the curve 0.725) was obtained using the basic model (i.e. age, family history of diabetes, BMI ≥25 kg/m² and use of insulin during pregnancy) plus insulinogenic index/fasting immunoreactive insulin <1.1. Decision-curve analysis showed that combining insulinogenic index/fasting immunoreactive insulin <1.1 with basic clinical information resulted in superior net benefits for prediction of postpartum glucose intolerance. Insulinogenic index/fasting immunoreactive insulin calculated using oral glucose tolerance test results during pregnancy is potentially useful for predicting early postpartum glucose intolerance in Japanese women with gestational diabetes. © 2018 Diabetes UK.
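The decision-curve analysis used above weighs true positives against false positives at each risk threshold. A minimal sketch of the net-benefit calculation, with invented outcomes and predicted risks rather than the study's data:

```python
# Minimal sketch of decision-curve analysis: net benefit of a risk model
# across threshold probabilities. Outcomes and predicted risks below are
# invented for illustration; they are not from the study described above.

def net_benefit(y_true, y_prob, threshold):
    """Net benefit at risk threshold t: NB = TP/n - FP/n * (t / (1 - t))."""
    n = len(y_true)
    tp = sum(1 for y, p in zip(y_true, y_prob) if p >= threshold and y == 1)
    fp = sum(1 for y, p in zip(y_true, y_prob) if p >= threshold and y == 0)
    return tp / n - fp / n * (threshold / (1 - threshold))

# Hypothetical outcomes (1 = glucose intolerance) and predicted risks.
y_true = [1, 0, 1, 0, 0, 1, 0, 1, 0, 0]
y_prob = [0.8, 0.2, 0.6, 0.3, 0.1, 0.7, 0.4, 0.9, 0.2, 0.5]

for t in (0.2, 0.4, 0.6):
    print(f"threshold {t:.1f}: net benefit {net_benefit(y_true, y_prob, t):+.3f}")
```

A model adds clinical value over the range of thresholds where its net benefit exceeds both the "treat all" and "treat none" strategies.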
Diederich, Emily; Thomas, Laura; Mahnken, Jonathan; Lineberry, Matthew
2018-06-01
Within simulation-based mastery learning (SBML) courses, there is inconsistent inclusion of learner pretesting, which requires considerable resources and is contrary to popular instructional frameworks. However, it may have several benefits, including its direct benefit as a form of deliberate practice and its facilitation of more learner-specific subsequent deliberate practice. We consider an unexplored potential benefit of pretesting: its ability to predict variable long-term learner performance. Twenty-seven residents completed an SBML course in central line insertion. Residents were tested on simulated central line insertion precourse, immediately postcourse, and after between 64 and 82 weeks. We analyzed pretest scores' prediction of delayed test scores, above and beyond prediction by program year, line insertion experiences in the interim, and immediate posttest scores. Pretest scores related strongly to delayed test scores (r = 0.59, P = 0.01; disattenuated ρ = 0.75). The number of independent central lines inserted also related to year-delayed test scores (r = 0.44, P = 0.02); other predictors did not discernibly relate. In a regression model jointly predicting delayed test scores, pretest was a significant predictor (β = 0.487, P = 0.011); number of independent insertions was not (β = 0.234, P = 0.198). This study suggests that pretests can play a major role in predicting learner variance in learning gains from SBML courses, thus facilitating more targeted refresher training. It also exposes a risk in SBML courses that learners who meet immediate mastery standards may be incorrectly assumed to have equal long-term learning gains.
A prediction model for colon cancer surveillance data.
Good, Norm M; Suresh, Krithika; Young, Graeme P; Lockett, Trevor J; Macrae, Finlay A; Taylor, Jeremy M G
2015-08-15
Dynamic prediction models make use of patient-specific longitudinal data to update individualized survival probability predictions based on current and past information. Colonoscopy (COL) and fecal occult blood test (FOBT) results were collected from two Australian surveillance studies on individuals characterized as high-risk based on a personal or family history of colorectal cancer. Motivated by a Poisson process, this paper proposes a generalized nonlinear model with a complementary log-log link as a dynamic prediction tool that produces individualized probabilities for the risk of developing advanced adenoma or colorectal cancer (AAC). This model allows predicted risk to depend on a patient's baseline characteristics and time-dependent covariates. Information on the dates and results of COLs and FOBTs was incorporated using time-dependent covariates that contributed to patient risk of AAC for a specified period following the test result. These covariates serve to update a person's risk as additional COL and FOBT test information becomes available. Model selection was conducted systematically through comparison of Akaike information criterion (AIC) values. Goodness-of-fit was assessed with the use of calibration plots to compare the predicted probability of event occurrence with the proportion of events observed. Abnormal COL results were found to significantly increase the risk of AAC for 1 year following the test. Positive FOBTs were found to significantly increase the risk of AAC for 3 months following the result. The covariates that incorporated the updated test results were of greater significance and had a larger effect on risk than the baseline variables. Copyright © 2015 John Wiley & Sons, Ltd.
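The complementary log-log link arises naturally from a Poisson process: with linear predictor η, the event probability over an interval is 1 − exp(−exp(η)). A minimal sketch of how such a model updates risk from time-dependent test covariates; the coefficients and effect windows below are invented for illustration and are not the fitted surveillance model:

```python
import math

# Illustrative complementary log-log risk update with time-dependent test
# covariates. Coefficients (1.2, 0.8) and the baseline eta are hypothetical.

def aac_risk(baseline_eta, months_since_abnormal_col=None,
             months_since_positive_fobt=None):
    """P(AAC in next interval) = 1 - exp(-exp(eta)). Eta grows while an
    abnormal colonoscopy (<12 months old) or a positive FOBT (<3 months
    old) remains in its effect window, mirroring the model's structure."""
    eta = baseline_eta
    if months_since_abnormal_col is not None and months_since_abnormal_col < 12:
        eta += 1.2   # hypothetical coefficient for recent abnormal COL
    if months_since_positive_fobt is not None and months_since_positive_fobt < 3:
        eta += 0.8   # hypothetical coefficient for recent positive FOBT
    return 1.0 - math.exp(-math.exp(eta))

print(round(aac_risk(-3.0), 4))                              # baseline risk
print(round(aac_risk(-3.0, months_since_abnormal_col=6), 4))  # updated risk
```

Once a test result leaves its effect window (12 months for COL, 3 months for FOBT in this sketch), the predicted risk reverts to baseline.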
Reusable Solid Rocket Motor Nozzle Joint-4 Thermal Analysis
NASA Technical Reports Server (NTRS)
Clayton, J. Louie
2001-01-01
This study provides for development and test verification of a thermal model used for prediction of joint heating environments, structural temperatures and seal erosions in the Space Shuttle Reusable Solid Rocket Motor (RSRM) Nozzle Joint-4. The heating environments are a result of rapid pressurization of the joint free volume assuming a leak path has occurred in the filler material used for assembly gap close out. Combustion gases flow along the leak path from nozzle environment to joint O-ring gland resulting in local heating to the metal housing and erosion of seal materials. Analysis of this condition was based on usage of the NASA Joint Pressurization Routine (JPR) for environment determination and the Systems Improved Numerical Differencing Analyzer (SINDA) for structural temperature prediction. Model generated temperatures, pressures and seal erosions are compared to hot fire test data for several different leak path situations. Investigated in the hot fire test program were nozzle joint-4 O-ring erosion sensitivities to leak path width in both open and confined joint geometries. Model predictions were in generally good agreement with the test data for the confined leak path cases. Worst case flight predictions are provided using the test-calibrated model. Analysis issues are discussed based on model calibration procedures.
Peters, S A; Laham, S M; Pachter, N; Winship, I M
2014-04-01
When clinicians facilitate and patients make decisions about predictive genetic testing, they often base their choices on the predicted emotional consequences of positive and negative test results. Research from psychology and decision making suggests that such predictions may often be biased. Work on affective forecasting (predicting one's future emotional states) shows that people tend to overestimate the impact of (especially negative) emotional events on their well-being, a phenomenon termed the impact bias. In this article, we review the causes and consequences of the impact bias in medical decision making, with a focus on applying such findings to predictive testing in clinical genetics. We also recommend strategies for reducing the impact bias and consider the ethical and practical implications of doing so. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
An Analysis of a Digital Variant of the Trail Making Test Using Machine Learning Techniques
Dahmen, Jessamyn; Cook, Diane; Fellows, Robert; Schmitter-Edgecombe, Maureen
2017-01-01
BACKGROUND The goal of this work is to develop a digital version of a standard cognitive assessment, the Trail Making Test (TMT), and assess its utility. OBJECTIVE This paper introduces a novel digital version of the TMT and introduces a machine learning based approach to assess its capabilities. METHODS Using digital Trail Making Test (dTMT) data collected from (N=54) older adult participants as feature sets, we use machine learning techniques to analyze the utility of the dTMT and evaluate the insights provided by the digital features. RESULTS Predicted TMT scores correlate well with clinical digital test scores (r=0.98) and paper time-to-completion scores (r=0.65). Predicted TICS (Telephone Interview for Cognitive Status) scores exhibited a small correlation with clinically derived TICS scores (r=0.12 for Part A, r=0.10 for Part B). Predicted FAB (Frontal Assessment Battery) scores exhibited a small correlation with clinically derived FAB scores (r=0.13 for Part A, r=0.29 for Part B). Digitally derived features were also used to predict diagnosis (AUC of 0.65). CONCLUSION Our findings indicate that the dTMT is capable of measuring the same aspects of cognition as the paper-based TMT. Furthermore, the dTMT’s additional data may be able to help monitor other cognitive processes not captured by the paper-based TMT alone. PMID:27886019
Within the field of chemical safety assessment, there is a desire to replace costly whole organism testing with more efficient and cost-effective alternatives based on in vitro test systems. Disruption of thyroid hormone signaling via inhibition of enzymes called deiodinases is o...
NASA Technical Reports Server (NTRS)
Bartolotta, Paul A.
1991-01-01
Metal Matrix Composites (MMC) and Intermetallic Matrix Composites (IMC) were identified as potential material candidates for advanced aerospace applications. They are especially attractive for high-temperature applications which require a low-density material that maintains its structural integrity at elevated temperatures. High-temperature fatigue resistance plays an important role in determining the structural integrity of the material. This study examines the relevance of test techniques, failure criteria, and life prediction as they pertain to an IMC material, specifically unidirectional SiC fiber-reinforced titanium aluminide. A series of strain- and load-controlled fatigue tests were conducted on unidirectional SiC/Ti-24Al-11Nb composite at 425 and 815 C. Several damage mechanism regimes were identified by using a strain-based representation of the data, Talreja's fatigue life diagram concept. Results of these tests were then used to address issues of test control modes, definition of failure, and testing techniques. Finally, a strain-based life prediction method was proposed for an IMC under tensile cyclic loadings at elevated temperatures.
Nonlinear viscoelastic characterization of polymer materials using a dynamic-mechanical methodology
NASA Technical Reports Server (NTRS)
Strganac, Thomas W.; Payne, Debbie Flowers; Biskup, Bruce A.; Letton, Alan
1995-01-01
Polymer materials retrieved from LDEF exhibit nonlinear constitutive behavior; thus the authors present a method to characterize nonlinear viscoelastic behavior using measurements from dynamic (oscillatory) mechanical tests. Frequency-derived measurements are transformed into time-domain properties providing the capability to predict long term material performance without a lengthy experimentation program. Results are presented for thin-film high-performance polymer materials used in the fabrication of high-altitude scientific balloons. Predictions based upon a linear test and analysis approach are shown to deteriorate for moderate to high stress levels expected for extended applications. Tests verify that nonlinear viscoelastic response is induced by large stresses. Hence, an approach is developed in which the stress-dependent behavior is examined in a manner analogous to modeling temperature-dependent behavior with time-temperature correspondence and superposition principles. The development leads to time-stress correspondence and superposition of measurements obtained through dynamic mechanical tests. Predictions of material behavior using measurements based upon linear and nonlinear approaches are compared with experimental results obtained from traditional creep tests. Excellent agreement is shown for the nonlinear model.
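Time-stress superposition, as used above, treats stress the way time-temperature superposition treats temperature: each short-term compliance curve is shifted horizontally in log time by a stress-dependent factor to assemble a long-term master curve. A toy illustration with an invented power-law material and an invented shift-factor law (not the balloon-film measurements):

```python
# Toy time-stress superposition: a hypothetical material whose creep at
# higher stress is equivalent to creep at the reference stress on an
# accelerated time scale t * a_sigma. All numbers are invented.

def compliance(t, stress, d0=1.0, n=0.2):
    """Hypothetical nonlinear material: log10(a_sigma) = 0.5*(stress - 1),
    so higher stress accelerates creep via an effective time t * a_sigma."""
    a_sigma = 10 ** (0.5 * (stress - 1.0))
    return d0 * (t * a_sigma) ** n

# Shifting the stress = 2 curve back by its a_sigma lands it exactly on
# the reference-stress (stress = 1) master curve.
ref = compliance(100.0, 1.0)
shifted = compliance(100.0 / 10 ** (0.5 * (2.0 - 1.0)), 2.0)
print(abs(ref - shifted) < 1e-9)
```

The practical payoff is the one the abstract describes: short dynamic tests at several stress levels, once shifted, stand in for a long-duration creep experiment at the service stress.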
Myer, Gregory D.; Ford, Kevin R.; Khoury, Jane; Succop, Paul; Hewett, Timothy E.
2012-01-01
Background Prospective measures of high knee abduction moment (KAM) during landing identify female athletes at high risk for anterior cruciate ligament injury. Laboratory-based measurements demonstrate 90% accuracy in prediction of high KAM. Clinic-based prediction algorithms that employ correlates derived from laboratory-based measurements also demonstrate high accuracy for prediction of high KAM mechanics during landing. Hypotheses Clinic-based measures derived from highly predictive laboratory-based models are valid for the accurate prediction of high KAM status, and simultaneous measurements using laboratory-based and clinic-based techniques highly correlate. Study Design Cohort study (diagnosis); Level of evidence, 2. Methods One hundred female athletes (basketball, soccer, volleyball players) were tested using laboratory-based measures to confirm the validity of identified laboratory-based correlate variables to clinic-based measures included in a prediction algorithm to determine high KAM status. To analyze selected clinic-based surrogate predictors, another cohort of 20 female athletes was simultaneously tested with both clinic-based and laboratory-based measures. Results The prediction model (odds ratio: 95% confidence interval), derived from laboratory-based surrogates including (1) knee valgus motion (1.59: 1.17-2.16 cm), (2) knee flexion range of motion (0.94: 0.89°-1.00°), (3) body mass (0.98: 0.94-1.03 kg), (4) tibia length (1.55: 1.20-2.07 cm), and (5) quadriceps-to-hamstrings ratio (1.70: 0.48%-6.0%), predicted high KAM status with 84% sensitivity and 67% specificity (P < .001). Clinic-based techniques that used a calibrated physician’s scale, a standard measuring tape, standard camcorder, ImageJ software, and an isokinetic dynamometer showed high correlation (knee valgus motion, r = .87; knee flexion range of motion, r = .95; and tibia length, r = .98) to simultaneous laboratory-based measurements. 
Body mass and quadriceps-to-hamstrings ratio were included in both methodologies and therefore had r values of 1.0. Conclusion Clinically obtainable measures of increased knee valgus, knee flexion range of motion, body mass, tibia length, and quadriceps-to-hamstrings ratio predict high KAM status in female athletes with high sensitivity and specificity. Female athletes who demonstrate high KAM landing mechanics are at increased risk for anterior cruciate ligament injury and are more likely to benefit from neuromuscular training targeted to this risk factor. Use of the developed clinic-based assessment tool may facilitate high-risk athletes’ entry into appropriate interventions that will have greater potential to reduce their injury risk. PMID:20595554
Infants Generate Goal-Based Action Predictions
ERIC Educational Resources Information Center
Cannon, Erin N.; Woodward, Amanda L.
2012-01-01
Predicting the actions of others is critical to smooth social interactions. Prior work suggests that both understanding and anticipation of goal-directed actions appears early in development. In this study, on-line goal prediction was tested explicitly using an adaptation of Woodward's (1998) paradigm for an eye-tracking task. Twenty 11-month-olds…
Bayesian model checking: A comparison of tests
NASA Astrophysics Data System (ADS)
Lucy, L. B.
2018-06-01
Two procedures for checking Bayesian models are compared using a simple test problem based on the local Hubble expansion. Over four orders of magnitude, p-values derived from a global goodness-of-fit criterion for posterior probability density functions agree closely with posterior predictive p-values. The former can therefore serve as an effective proxy for the difficult-to-calculate posterior predictive p-values.
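A posterior predictive p-value is computed by drawing parameters from the posterior, simulating replicate data, and asking how often a discrepancy measure for the replicates is at least as extreme as for the observed data. A minimal sketch for a normal model with known variance and a flat prior on the mean; this is an invented toy problem, not the paper's Hubble-expansion test case:

```python
import random

# Posterior predictive p-value sketch: normal data, known sigma, flat
# prior on the mean (so the posterior of mu is N(ybar, sigma^2/n)).
# Data are simulated from the model itself, so p should be moderate.

random.seed(1)
sigma = 1.0
y = [random.gauss(0.0, sigma) for _ in range(50)]   # "observed" data
n = len(y)
ybar = sum(y) / n

def discrepancy(data, mu):
    # Chi-square discrepancy T(data; mu).
    return sum((x - mu) ** 2 / sigma ** 2 for x in data)

extreme = 0
draws = 2000
for _ in range(draws):
    mu = random.gauss(ybar, sigma / n ** 0.5)           # posterior draw
    y_rep = [random.gauss(mu, sigma) for _ in range(n)]  # replicate data
    if discrepancy(y_rep, mu) >= discrepancy(y, mu):
        extreme += 1

p_ppp = extreme / draws
print(f"posterior predictive p-value: {p_ppp:.3f}")
```

A p-value near 0 or 1 would flag model misfit; for well-specified models this statistic is known to concentrate around 0.5, which is part of why cheaper proxies (like the global goodness-of-fit criterion above) are attractive.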
NASA Technical Reports Server (NTRS)
Chapman, A. J.
1973-01-01
Reusable surface insulation materials, which were developed as heat shields for the space shuttle, were tested over a range of conditions including heat-transfer rates between 160 and 620 kW/sq m. The lowest of these heating rates was in a range predicted for the space shuttle during reentry, and the highest was more than twice the predicted entry heating on shuttle areas where reusable surface insulation would be used. Individual specimens were tested repeatedly at increasingly severe conditions to determine the maximum heating rate and temperature capability. A silica-base material experienced only minimal degradation during repeated tests which included conditions twice as severe as predicted shuttle entry and withstood cumulative exposures three times longer than the best mullite material. Mullite-base materials cracked and experienced incipient melting at conditions within the range predicted for shuttle entry. Neither silica nor mullite materials consistently survived the test series with unbroken waterproof surfaces. Surface temperatures for a silica and a mullite material followed a trend expected for noncatalytic surfaces, whereas surface temperatures for a second mullite material appeared to follow a trend expected for a catalytic surface.
An object programming based environment for protein secondary structure prediction.
Giacomini, M; Ruggiero, C; Sacile, R
1996-01-01
The most frequently used methods for protein secondary structure prediction are empirical statistical methods and rule-based methods. A consensus system based on object-oriented programming is presented, which integrates the two approaches with the aim of improving the prediction quality. This system uses an object-oriented knowledge representation based on the concepts of conformation, residue and protein, where the conformation class is the basis, the residue class derives from it and the protein class derives from the residue class. The system has been tested with satisfactory results on several proteins of the Brookhaven Protein Data Bank. Its results have been compared with the results of the most widely used prediction methods, and they show a higher prediction capability and greater stability. Moreover, the system itself provides an index of the reliability of its current prediction. This system can also be regarded as a basis structure for programs of this kind.
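The class hierarchy described above (conformation as the base, residue derived from it, protein derived from residue) can be sketched as follows, with a simple majority vote standing in for the system's actual consensus rules; the predictors, sequence, and vote logic are invented:

```python
from collections import Counter

# Sketch of the conformation -> residue -> protein hierarchy described in
# the abstract above. The majority vote and example predictions are
# invented stand-ins for the system's actual consensus rules.

class Conformation:
    STATES = ("H", "E", "C")   # helix, strand, coil

class Residue(Conformation):
    def __init__(self, amino_acid):
        self.amino_acid = amino_acid
        self.votes = []        # per-method secondary-structure predictions

    def add_vote(self, state):
        assert state in self.STATES
        self.votes.append(state)

    def consensus(self):
        # Majority vote; on a tie the first method's vote wins, because
        # Counter.most_common is stable with respect to insertion order.
        return Counter(self.votes).most_common(1)[0][0]

class Protein(Residue):        # derivation chain as stated in the abstract
    def __init__(self, sequence):
        super().__init__(amino_acid=None)
        self.residues = [Residue(aa) for aa in sequence]

    def consensus_structure(self):
        return "".join(r.consensus() for r in self.residues)

# Votes from a hypothetical statistical method and a rule-based method.
prot = Protein("MKV")
for res, statistical, rule_based in zip(prot.residues, "HHC", "HEC"):
    res.add_vote(statistical)
    res.add_vote(rule_based)
print(prot.consensus_structure())   # one consensus state per residue
```

The per-residue vote list is also a natural place to derive the abstract's reliability index, e.g. the fraction of methods agreeing with the consensus.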
NASA Technical Reports Server (NTRS)
Celaya, Jose; Kulkarni, Chetan; Biswas, Gautam; Saha, Sankalita; Goebel, Kai
2011-01-01
A remaining useful life prediction methodology for electrolytic capacitors is presented. This methodology is based on the Kalman filter framework and an empirical degradation model. Electrolytic capacitors are used in several applications ranging from power supplies on critical avionics equipment to power drivers for electro-mechanical actuators. These devices are known for their comparatively low reliability, and given their criticality in electronics subsystems, they are good candidates for component-level prognostics and health management. Prognostics provides a way to assess the remaining useful life of a capacitor based on its current state of health and its anticipated future usage and operational conditions. We also present experimental results of an accelerated aging test under electrical stresses. The data obtained in this test form the basis for a remaining life prediction algorithm in which a model of the degradation process is suggested. This preliminary remaining life prediction algorithm serves as a demonstration of how prognostics methodologies could be used for electrolytic capacitors. In addition, the use of degradation progression data from accelerated aging provides an avenue for validation of applications of the Kalman filter based prognostics methods typically used for remaining useful life predictions in other applications.
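A Kalman filter for prognostics tracks a noisy degradation signal and extrapolates the filtered state to a failure threshold. A minimal scalar-state sketch with an invented linear degradation model, noise levels, and failure criterion (not the authors' empirical capacitor model):

```python
import random

# Scalar Kalman filter tracking a degradation indicator (e.g., percent
# capacitance loss), then extrapolating to a failure threshold to get a
# remaining-useful-life (RUL) estimate. All numbers are invented.

random.seed(0)
drift = 0.5          # assumed degradation per time step (model input)
q, r = 0.01, 0.25    # process / measurement noise variances
threshold = 20.0     # failure criterion on the health indicator

x_hat, p = 0.0, 1.0  # state estimate and its variance
truth = 0.0
for t in range(30):
    truth += drift
    z = truth + random.gauss(0.0, r ** 0.5)   # noisy measurement
    # Predict step: propagate state and variance through the model.
    x_hat += drift
    p += q
    # Update step: blend prediction with the measurement.
    k = p / (p + r)
    x_hat += k * (z - x_hat)
    p *= (1.0 - k)

# RUL: steps until the estimate reaches the threshold, assuming the same
# drift continues under anticipated future usage.
rul = max(0.0, (threshold - x_hat) / drift)
print(f"health estimate {x_hat:.2f}, predicted RUL {rul:.1f} steps")
```

In a fuller implementation the drift itself would be part of the state (or come from the empirical degradation model), so the filter learns the degradation rate rather than assuming it.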
NASA Technical Reports Server (NTRS)
Celaya, Jose R.; Kulkarni, Chetan S.; Biswas, Gautam; Goebel, Kai
2012-01-01
A remaining useful life prediction methodology for electrolytic capacitors is presented. This methodology is based on the Kalman filter framework and an empirical degradation model. Electrolytic capacitors are used in several applications ranging from power supplies on critical avionics equipment to power drivers for electro-mechanical actuators. These devices are known for their comparatively low reliability, and given their criticality in electronics subsystems, they are good candidates for component-level prognostics and health management. Prognostics provides a way to assess the remaining useful life of a capacitor based on its current state of health and its anticipated future usage and operational conditions. We also present experimental results of an accelerated aging test under electrical stresses. The data obtained in this test form the basis for a remaining life prediction algorithm in which a model of the degradation process is suggested. This preliminary remaining life prediction algorithm serves as a demonstration of how prognostics methodologies could be used for electrolytic capacitors. In addition, the use of degradation progression data from accelerated aging provides an avenue for validation of applications of the Kalman filter based prognostics methods typically used for remaining useful life predictions in other applications.
The effects of physical aging at elevated temperatures on the viscoelastic creep on IM7/K3B
NASA Technical Reports Server (NTRS)
Gates, Thomas S.; Feldman, Mark
1994-01-01
Physical aging at elevated temperature of the advanced composite IM7/K3B was investigated through the use of creep compliance tests. Testing consisted of short-term isothermal creep/recovery tests, with the creep segments performed at constant load. The matrix-dominated transverse tensile and in-plane shear behavior were measured at temperatures ranging from 200 to 230 C. Through the use of time-based shifting procedures, the aging shift factors, shift rates, and momentary master curve parameters were found at each temperature. These material parameters were used as input to a predictive methodology based upon effective time theory and linear viscoelasticity combined with classical lamination theory. Long-term creep compliance test data were compared to predictions to verify the method. The model was then used to predict the long-term creep behavior for several general laminates.
Allen, D D; Bond, C A
2001-07-01
Good admissions decisions are essential for identifying successful students and good practitioners. Various parameters have been shown to have predictive power for academic success. Previous academic performance, the Pharmacy College Admissions Test (PCAT), and specific prepharmacy courses have been suggested as academic performance indicators. However, critical thinking abilities have not been evaluated. We evaluated the connection between academic success and each of the following predictive parameters: the California Critical Thinking Skills Test (CCTST) score, PCAT score, interview score, overall academic performance prior to admission at a pharmacy school, and performance in specific prepharmacy courses. We confirmed previous reports but demonstrated intriguing results in predicting practice-based skills. Critical thinking skills predict practice-based course success. Also, the CCTST and PCAT scores (Pearson correlation [pc] = 0.448, p < 0.001) were closely related in our students. The strongest predictors of practice-related courses and clerkship success were PCAT (pc = 0.237, p < 0.001) and CCTST (pc = 0.201, p < 0.001). These findings and other analyses suggest that the PCAT may predict critical thinking skills in pharmacy practice courses and clerkships. Further study is needed to confirm this finding and determine which PCAT components predict critical thinking abilities.
NASA Astrophysics Data System (ADS)
Young, B. A.; Gao, Xiaosheng; Srivatsan, T. S.
2009-10-01
In this paper we compare and contrast the crack growth rate of a nickel-base superalloy (Alloy 690) in the Pressurized Water Reactor (PWR) environment. Over the last few years, a preponderance of test data has been gathered on both Alloy 690 thick plate and Alloy 690 tubing. The original model, essentially based on a small data set for thick plate, compensated for temperature, load ratio, and stress-intensity range, but did not compensate for the fatigue threshold of the material. As additional test data on both plate and tube product became available, the model was gradually revised to account for threshold properties. Both the original and revised models generated acceptable results for data that were above 1 × 10⁻¹¹ m/s. However, the test data at the lower growth rates were over-predicted by the non-threshold model. Since the original model did not take the fatigue threshold into account, this model predicted no operating stress below which the material would be free of fatigue crack growth. Because of an over-prediction of the growth rate below 1 × 10⁻¹¹ m/s, due to a combination of low stress, small crack size, and long rise time, the model in general leads to an under-prediction of the total available life of the components.
A framework for the design and development of physical employment tests and standards.
Payne, W; Harvey, J
2010-07-01
Because operational tasks in the uniformed services (military, police, fire and emergency services) are physically demanding and incur the risk of injury, employment policy in these services is usually competency-based and predicated on objective physical employment standards (PESs) based on physical employment tests (PETs). In this paper, a comprehensive framework for the design of PETs and PESs is presented. Three broad approaches to physical employment testing are described and compared: generic predictive testing; task-related predictive testing; and task simulation testing. Techniques for the selection of a set of tests with good coverage of job requirements, including job task analysis, physical demands analysis and correlation analysis, are discussed. Regarding individual PETs, theoretical considerations, including measurability, discriminating power, reliability and validity, and practical considerations, including development of protocols, resource requirements, administrative issues and safety, are considered. With regard to the setting of PESs, criterion referencing and norm referencing are discussed. STATEMENT OF RELEVANCE: This paper presents an integrated and coherent framework for the development of PESs and hence provides a much-needed, theoretically based but practically oriented guide for organisations seeking to establish valid and defensible PESs.
Sampson, Maureen L; Gounden, Verena; van Deventer, Hendrik E; Remaley, Alan T
2016-02-01
The main drawback of the periodic analysis of quality control (QC) material is that test performance is not monitored in time periods between QC analyses, potentially leading to the reporting of faulty test results. The objective of this study was to develop a patient-based QC procedure for the more timely detection of test errors. Results from a Chem-14 panel measured on the Beckman LX20 analyzer were used to develop the model. Each test result was predicted from the other 13 members of the panel by multiple regression, which resulted in correlation coefficients between the predicted and measured result of >0.7 for 8 of the 14 tests. A logistic regression model, which utilized the measured test result, the predicted test result, the day of the week and the time of day, was then developed for predicting test errors. The output of the logistic regression was tallied by a daily CUSUM approach and used to predict test errors, with a fixed specificity of 90%. The mean average run length (ARL) before error detection by CUSUM-Logistic Regression (CSLR) was 20, with a mean sensitivity of 97%, which was considerably shorter than the mean ARL of 53 (sensitivity 87.5%) for a simple prediction model that used only the measured result for error detection. A CUSUM-Logistic Regression analysis of patient laboratory data can be an effective approach for the rapid and sensitive detection of clinical laboratory errors. Published by Elsevier Inc.
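The CUSUM-Logistic Regression idea can be sketched as a per-result error probability, here from a hand-set logistic model standing in for the fitted Chem-14 model, accumulated in a one-sided CUSUM that alarms when a decision limit is crossed. The coefficients, allowance, limit, and data below are all invented for illustration:

```python
import math

# Patient-based QC sketch: a logistic error score per result feeds a
# one-sided CUSUM. Coefficients (-3.0, 2.0), allowance k, limit h, and
# the example data are hypothetical, not the study's fitted values.

def error_probability(measured, predicted):
    """Hypothetical logistic model: a larger measured-vs-predicted
    discrepancy gives a higher probability that the result is erroneous."""
    z = -3.0 + 2.0 * abs(measured - predicted)
    return 1.0 / (1.0 + math.exp(-z))

def cusum(scores, k=0.2, h=1.5):
    """One-sided CUSUM over error scores; returns the index of the first
    alarm, or None. k is the allowance, h the decision limit."""
    s = 0.0
    for i, score in enumerate(scores):
        s = max(0.0, s + score - k)
        if s > h:
            return i
    return None

# Ten in-control results followed by a simulated analytic shift.
pairs = [(5.0, 5.1)] * 10 + [(7.0, 5.1)] * 10
scores = [error_probability(m, p) for m, p in pairs]
print("first alarm at result index:", cusum(scores))
```

In-control results contribute nothing to the tally (their scores fall below the allowance), so the statistic only climbs, and eventually alarms, once the shift begins.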
Composite Stress Rupture: A New Reliability Model Based on Strength Decay
NASA Technical Reports Server (NTRS)
Reeder, James R.
2012-01-01
A model is proposed to estimate reliability for stress rupture of composite overwrap pressure vessels (COPVs) and similar composite structures. This new reliability model is generated by assuming a strength degradation (or decay) over time. The model suggests that most of the strength decay occurs late in life. The strength decay model is shown to predict a response similar to that predicted by a traditional reliability model for stress rupture based on tests at a single stress level. In addition, the model predicts that even though there is strength decay due to proof loading, a significant overall increase in reliability is gained by eliminating any weak vessels, which would fail early. The model predicts that there should be significant periods of safe life following proof loading, because time is required for the strength to decay from the proof stress level to the subsequent loading level. Suggestions for testing the strength decay reliability model have been made. If the strength decay reliability model predictions are shown through testing to be accurate, COPVs may be designed to carry a higher level of stress than is currently allowed, which will enable the production of lighter structures.
NASA Technical Reports Server (NTRS)
Mcdermott, P. P.
1980-01-01
The design of an accelerated life test program for electric batteries is discussed. A number of observations and suggestions on the procedures and objectives for conducting an accelerated life test program are presented. Equations based on nonlinear regression analysis for predicting the accelerated life test parameters are discussed.
Romanens, Michel; Ackermann, Franz; Spence, John David; Darioli, Roger; Rodondi, Nicolas; Corti, Roberto; Noll, Georg; Schwenkglenks, Matthias; Pencina, Michael
2010-02-01
Cardiovascular risk assessment might be improved with the addition of emerging new tests derived from atherosclerosis imaging, laboratory tests, or functional tests. This article reviews relative risk, odds ratios, receiver operating characteristic curves, posttest risk calculations based on likelihood ratios, the net reclassification improvement, and integrated discrimination. This serves to determine whether a new test has added clinical value on top of conventional risk testing and how this can be verified statistically. Two clinically meaningful examples serve to illustrate novel approaches. This work serves as a review and groundwork for the development of new guidelines on cardiovascular risk prediction, taking emerging tests into account, to be proposed in the future by members of the 'Taskforce on Vascular Risk Prediction' under the auspices of the Working Group 'Swiss Atherosclerosis' of the Swiss Society of Cardiology.
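The posttest risk calculation based on likelihood ratios mentioned above is a short computation in the odds form of Bayes' theorem:

```python
def posttest_probability(pretest_p, lr):
    """Convert a pretest probability to a posttest probability using a
    test's likelihood ratio (odds form of Bayes' theorem)."""
    pretest_odds = pretest_p / (1.0 - pretest_p)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1.0 + posttest_odds)
```

For example, a pretest risk of 20% combined with a positive likelihood ratio of 5 yields a posttest risk of about 56%; a likelihood ratio of 1 leaves the risk unchanged.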
NASA Technical Reports Server (NTRS)
Yechout, T. R.; Braman, K. B.
1984-01-01
The development, implementation, and flight test evaluation of a performance modeling technique that required a limited amount of quasi-steady-state flight test data to predict the overall one-g performance characteristics of an aircraft are described. The concept definition phase of the program included development of: (1) the relationships for defining aerodynamic characteristics from quasi-steady-state maneuvers; (2) a simplified in-flight thrust and airflow prediction technique; (3) a flight test maneuvering sequence which efficiently provided definition of baseline aerodynamic and engine characteristics, including power effects on lift and drag; and (4) the algorithms necessary for cruise and flight trajectory predictions. Implementation of the concept included design of the overall flight test data flow, definition of the instrumentation system and ground test requirements, development and verification of all applicable software, and consolidation of the overall requirements in a flight test plan.
Not just the norm: exemplar-based models also predict face aftereffects.
Ross, David A; Deroche, Mickael; Palmeri, Thomas J
2014-02-01
The face recognition literature has considered two competing accounts of how faces are represented within the visual system: Exemplar-based models assume that faces are represented via their similarity to exemplars of previously experienced faces, while norm-based models assume that faces are represented with respect to their deviation from an average face, or norm. Face identity aftereffects have been taken as compelling evidence in favor of a norm-based account over an exemplar-based account. After a relatively brief period of adaptation to an adaptor face, the perceived identity of a test face is shifted toward a face with attributes opposite to those of the adaptor, suggesting an explicit psychological representation of the norm. Surprisingly, despite near universal recognition that face identity aftereffects imply norm-based coding, there have been no published attempts to simulate the predictions of norm- and exemplar-based models in face adaptation paradigms. Here, we implemented and tested variations of norm and exemplar models. Contrary to common claims, our simulations revealed that both an exemplar-based model and a version of a two-pool norm-based model, but not a traditional norm-based model, predict face identity aftereffects following face adaptation.
Can history and exam alone reliably predict pneumonia?
Graffelman, A W; le Cessie, S; Knuistingh Neven, A; Wilemssen, F E J A; Zonderland, H M; van den Broek, P J
2007-06-01
Prediction rules based on clinical information have been developed to support the diagnosis of pneumonia and help limit the use of expensive diagnostic tests. However, these prediction rules need to be validated in the primary care setting. Adults who met our definition of lower respiratory tract infection (LRTI) were recruited for a prospective study on the causes of LRTI, between November 15, 1998 and June 1, 2001 in the Leiden region of The Netherlands. Clinical information was collected and chest radiography was performed. A literature search was also done to find prediction rules for pneumonia. 129 patients--26 with pneumonia and 103 without--were included, and 6 prediction rules were applied. Only the model with the addition of a test for C-reactive protein had a significant area under the curve of 0.69 (95% confidence interval [CI], 0.58-0.80), with a positive predictive value of 47% (95% CI, 23-71) and a negative predictive value of 84% (95% CI, 77-91). The pretest probabilities for the presence and absence of pneumonia were 20% and 80%, respectively. Models based only on clinical information do not reliably predict the presence of pneumonia. The addition of an elevated C-reactive protein level seems of little value.
A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.
Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing
2018-01-15
Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, SVM parameters have been set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to raise its ability to avoid local optima. To verify the performance of NAPSO-SVM, three types of algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are applied as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and smaller prediction errors, and it is an effective method for predicting the dynamic measurement errors of sensors.
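A minimal sketch of the NAPSO component: standard PSO plus simulated-annealing acceptance of personal bests and natural-selection replacement of the worst particles. In the paper this optimizer tunes the SVM's parameters; here a simple stand-in objective `f` takes the place of the SVM's validation error, and all constants are illustrative.

```python
import math
import random

def napso_minimize(f, dim=2, n_particles=20, iters=200,
                   lo=-5.0, hi=5.0, w=0.7, c1=1.5, c2=1.5,
                   temp0=1.0, cooling=0.97, cull=0.2, seed=0):
    """PSO augmented with simulated annealing and natural selection (sketch)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    temp = temp0
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            fi = f(pos[i])
            # Simulated-annealing acceptance: occasionally keep a worse pbest
            if fi < pbest_f[i] or rng.random() < math.exp(-(fi - pbest_f[i]) / temp):
                pbest[i], pbest_f[i] = pos[i][:], fi
            if pbest_f[i] < gbest_f:
                gbest, gbest_f = pbest[i][:], pbest_f[i]
        # Natural selection: restart the worst particles at the best position
        order = sorted(range(n_particles), key=lambda i: pbest_f[i])
        for i in order[-int(cull * n_particles):]:
            pos[i] = gbest[:]
            vel[i] = [0.0] * dim
        temp *= cooling
    return gbest, gbest_f
```

To tune an SVM, `f` would map a candidate (C, gamma) pair to cross-validation error; the annealing temperature decays so late iterations behave like plain PSO.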
Application of a Physics-Based Stabilization Criterion to Flight System Thermal Testing
NASA Technical Reports Server (NTRS)
Baker, Charles; Garrison, Matthew; Cottingham, Christine; Peabody, Sharon
2010-01-01
The theory shown here can provide thermal stability criteria based on physics and a goal steady-state error rather than on an arbitrary "X% Q/mC(sub P)" method. The ability to accurately predict steady-state temperatures well before thermal balance is reached could be very useful during testing. This holds true for systems where components are changing temperature at different rates, although it works better for the components closest to the sink. However, the application to these test cases shows some significant limitations: the theory quickly falls apart if the thermal control system in question is tightly coupled to a large mass not accounted for in the calculations, so it is more useful in subsystem-level testing than in full orbiter tests. Tight couplings to a fluctuating sink cause noise in the steady-state temperature predictions.
Offgassing Characterization of the Columbus Laboratory Module
NASA Technical Reports Server (NTRS)
Rampini, Riccardo; Lobascio, Cesare; Perry, Jay L.; Hinderer, Stephan
2005-01-01
Trace gaseous contamination in the cabin environment is a major concern for manned spacecraft, especially those designed for long duration missions, such as the International Space Station (ISS). During the design phase, predicting the European-built Columbus laboratory module's contribution to the ISS's overall trace contaminant load relied on "trace gas budgeting" based on material-level and assembled-article test data. In support of the Qualification Review, a final offgassing test was performed on the complete Columbus module to gain cumulative system offgassing data. A comparison between the offgassing load predicted from the budgeted material/assembled-article-level offgassing rates and the module-level offgassing test results is presented. The Columbus module offgassing test results are also compared to results from similar tests conducted for the Node 1, U.S. Laboratory, and Airlock modules.
Pretreatment data is highly predictive of liver chemistry signals in clinical trials
Cai, Zhaohui; Bresell, Anders; Steinberg, Mark H; Silberg, Debra G; Furlong, Stephen T
2012-01-01
Purpose: The goal of this retrospective analysis was to assess how well predictive models could determine which patients would develop liver chemistry signals during clinical trials based on their pretreatment (baseline) information. Patients and methods: Based on data from 24 late-stage clinical trials, classification models were developed to predict liver chemistry outcomes using baseline information, which included demographics, medical history, concomitant medications, and baseline laboratory results. Results: Predictive models using baseline data predicted which patients would develop liver signals during the trials with average validation accuracy around 80%. Baseline levels of individual liver chemistry tests were most important for predicting their own elevations during the trials. High bilirubin levels at baseline were not uncommon and were associated with a high risk of developing biochemical Hy’s law cases. Baseline γ-glutamyltransferase (GGT) level appeared to have some predictive value, but did not increase predictability beyond using established liver chemistry tests. Conclusion: It is possible to predict which patients are at a higher risk of developing liver chemistry signals using pretreatment (baseline) data. Derived knowledge from such predictions may allow proactive and targeted risk management, and the type of analysis described here could help determine whether new biomarkers offer improved performance over established ones. PMID:23226004
[Prediction of 137Cs accumulation in animal products in the territory of Semipalatinsk test site].
Spiridonov, S I; Gontarenko, I A; Mukusheva, M K; Fesenko, S V; Semioshkina, N A
2005-01-01
The paper describes mathematical models for 137Cs behavior in the organisms of horses and sheep pasturing on the area bordering the testing area "Ground Zero" of the Semipalatinsk Test Site. The models are parameterized using data from an experiment with the breeds of animals now commonly encountered within the Semipalatinsk Test Site. Predictive calculations with the models devised have shown that 137Cs concentrations in the milk of horses and sheep pasturing on the area bordering "Ground Zero" can exceed the adopted standards for a long period of time.
Measures of accuracy and performance of diagnostic tests.
Drobatz, Kenneth J
2009-05-01
Diagnostic tests are integral to the practice of veterinary cardiology, other specialties, and general veterinary medicine. Developing and understanding diagnostic tests is one of the cornerstones of clinical research. This manuscript describes diagnostic test properties including sensitivity, specificity, predictive value, likelihood ratio, and the receiver operating characteristic curve, based on a review of practical book chapters and standard statistics manuscripts. Each of these measures is described and illustrated. A basic understanding of how diagnostic tests are developed and interpreted is essential in reviewing clinical scientific papers and understanding evidence-based medicine.
Schädler, Marc René; Warzybok, Anna; Meyer, Bernd T.; Brand, Thomas
2016-01-01
To characterize the individual patient’s hearing impairment as obtained with the matrix sentence recognition test, a simulation Framework for Auditory Discrimination Experiments (FADE) is extended here using the Attenuation and Distortion (A+D) approach by Plomp as a blueprint for setting the individual processing parameters. FADE has been shown to predict the outcome of both speech recognition tests and psychoacoustic experiments based on simulations using an automatic speech recognition system requiring only a few assumptions. It builds on the closed-set matrix sentence recognition test, which is advantageous for testing individual speech recognition in a way comparable across languages. Individual predictions of speech recognition thresholds in stationary and in fluctuating noise were derived using the audiogram and an estimate of the internal level uncertainty for modeling the individual Plomp curves fitted to the data with the Attenuation (A-) and Distortion (D-) parameters of the Plomp approach. The “typical” audiogram shapes from Bisgaard et al. with or without a “typical” level uncertainty and the individual data were used for individual predictions. As a result, the individualization of the level uncertainty was found to be more important than the exact shape of the individual audiogram to accurately model the outcome of the German Matrix test in stationary or fluctuating noise for listeners with hearing impairment. The prediction accuracy of the individualized approach also outperforms the (modified) Speech Intelligibility Index approach, which is based on the individual threshold data only. PMID:27604782
Kormány, Róbert; Fekete, Jenő; Guillarme, Davy; Fekete, Szabolcs
2014-02-01
The goal of this study was to evaluate the accuracy of simulated robustness testing using commercial modelling software (DryLab) and state-of-the-art stationary phases. For this purpose, a mixture of amlodipine and its seven related impurities was analyzed on short narrow bore columns (50×2.1mm, packed with sub-2μm particles) providing short analysis times. The performance of commercial modelling software for robustness testing was systematically compared to experimental measurements and DoE based predictions. We have demonstrated that the reliability of predictions was good, since the predicted retention times and resolutions were in good agreement with the experimental ones at the edges of the design space. On average, the retention time relative errors were <1.0%, while the predicted critical resolution errors ranged between 6.9 and 17.2%. Because the simulated robustness testing requires significantly less experimental work than the DoE based predictions, we think that robustness could now be investigated in the early stage of method development. Moreover, the column interchangeability, which is also an important part of robustness testing, was investigated considering five different C8 and C18 columns packed with sub-2μm particles. Again, thanks to modelling software, we proved that the separation was feasible on all columns within the same analysis time (less than 4 min) by proper adjustment of variables. Copyright © 2013 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Parkin, G.; O'Donnell, G.; Ewen, J.; Bathurst, J. C.; O'Connell, P. E.; Lavabre, J.
1996-02-01
Validation methods commonly used to test catchment models are not capable of demonstrating a model's fitness for making predictions for catchments where the catchment response is not known (including hypothetical catchments, and future conditions of existing catchments which are subject to land-use or climate change). This paper describes the first use of a new method of validation (Ewen and Parkin, 1996. J. Hydrol., 175: 583-594) designed to address these types of application; the method involves making 'blind' predictions of selected hydrological responses which are considered important for a particular application. SHETRAN (a physically based, distributed catchment modelling system) is tested on a small Mediterranean catchment. The test involves quantification of the uncertainty in four predicted features of the catchment response (continuous hydrograph, peak discharge rates, monthly runoff, and total runoff), and comparison of observations with the predicted ranges for these features. The results of this test are considered encouraging.
Karzmark, Peter; Deutsch, Gayle K
2018-01-01
This investigation was designed to determine the predictive accuracy of a comprehensive neuropsychological and a brief neuropsychological test battery with regard to the capacity to perform instrumental activities of daily living (IADLs). Accuracy statistics that included measures of sensitivity, specificity, positive and negative predictive power, and positive likelihood ratio were calculated for both types of batteries. The sample was drawn from a general neurological group of adults (n = 117) that included a number of older participants (age >55; n = 38). Standardized neuropsychological assessments were administered to all participants and comprised the Halstead Reitan Battery and portions of the Wechsler Adult Intelligence Scale-III. A comprehensive test battery yielded a moderate increase over base rate in predictive accuracy that generalized to older individuals. There was only limited support for using a brief battery, for although sensitivity was high, specificity was low. We found that a comprehensive neuropsychological test battery provided good classification accuracy for predicting IADL capacity.
NASA Astrophysics Data System (ADS)
Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun
2017-11-01
In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., Ogata-Banks solution) is found to be most representative for the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations with the reference estimation by the Ogata-Banks solution, where a part of earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimations of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems.
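For reference, the Ogata-Banks solution adopted above as the data model has a closed form; a sketch for a continuous source at x = 0 (symbols follow the usual convention: v is the seepage velocity, D the dispersion coefficient):

```python
import math

def ogata_banks(x, t, v, D, c0=1.0):
    """Ogata-Banks solution of the 1-D advection-dispersion equation for
    steady flow: concentration at distance x and time t for a continuous
    source of concentration c0 held at x = 0, seepage velocity v, and
    longitudinal dispersion coefficient D."""
    a = (x - v * t) / (2.0 * math.sqrt(D * t))
    b = (x + v * t) / (2.0 * math.sqrt(D * t))
    return 0.5 * c0 * (math.erfc(a) + math.exp(v * x / D) * math.erfc(b))
```

The second term matters mainly near the source; for large x the `exp` factor can overflow numerically, so practical implementations often drop or rewrite it once its contribution is negligible.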
A weighted generalized score statistic for comparison of predictive values of diagnostic tests.
Kosinski, Andrzej S
2013-03-15
Positive and negative predictive values are important measures of a medical diagnostic test performance. We consider testing equality of two positive or two negative predictive values within a paired design in which all patients receive two diagnostic tests. The existing statistical tests for testing equality of predictive values are either Wald tests based on the multinomial distribution or the empirical Wald and generalized score tests within the generalized estimating equations (GEE) framework. As presented in the literature, these test statistics have considerably complex formulas without clear intuitive insight. We propose their re-formulations that are mathematically equivalent but algebraically simple and intuitive. As is clearly seen with a new re-formulation we presented, the generalized score statistic does not always reduce to the commonly used score statistic in the independent samples case. To alleviate this, we introduce a weighted generalized score (WGS) test statistic that incorporates empirical covariance matrix with newly proposed weights. This statistic is simple to compute, always reduces to the score statistic in the independent samples situation, and preserves type I error better than the other statistics as demonstrated by simulations. Thus, we believe that the proposed WGS statistic is the preferred statistic for testing equality of two predictive values and for corresponding sample size computations. The new formulas of the Wald statistics may be useful for easy computation of confidence intervals for difference of predictive values. The introduced concepts have potential to lead to development of the WGS test statistic in a general GEE setting. Copyright © 2012 John Wiley & Sons, Ltd.
Flight test derived heating math models for critical locations on the orbiter during reentry
NASA Technical Reports Server (NTRS)
Hertzler, E. K.; Phillips, P. W.
1983-01-01
An analysis technique was developed for expanding the aerothermodynamic envelope of the Space Shuttle without subjecting the vehicle to sustained flight at more stressing heating conditions. A transient analysis program was developed to take advantage of the transient maneuvers that were flown as part of this analysis technique. Heat rates were derived from flight test data for various locations on the orbiter. The flight derived heat rates were used to update heating models based on predicted data. Future missions were then analyzed based on these flight adjusted models. A technique for comparing flight and predicted heating rate data and the extrapolation of the data to predict the aerothermodynamic environment of future missions is presented.
Predicting future protection of respirator users: Statistical approaches and practical implications.
Hu, Chengcheng; Harber, Philip; Su, Jing
2016-01-01
The purpose of this article is to describe a statistical approach for predicting a respirator user's fit factor in the future based upon results from initial tests. A statistical prediction model was developed based upon the joint distribution of multiple fit factor measurements over time obtained from linear mixed effect models. The model accounts for within-subject correlation as well as short-term (within one day) and longer-term variability. As an example of applying this approach, model parameters were estimated from a research study in which volunteers were trained by three different modalities to use one of two types of respirators. They underwent two quantitative fit tests at the initial session and two on the same day approximately six months later. The fitted models demonstrated correlation and gave the estimated distribution of future fit test results conditional on past results for an individual worker. This approach can be applied to establishing a criterion value for passing an initial fit test to provide a reasonable likelihood that a worker will be adequately protected in the future, and to optimizing the repeat fit factor test intervals individually for each user for cost-effective testing.
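The conditional prediction described above can be illustrated for the simplest case of one past and one future (log-scale) fit factor measurement. The bivariate-normal setup and the parameter values in the test below are assumptions for illustration, not the study's estimates.

```python
import math

def predict_future(past, mu, sigma, rho):
    """Conditional distribution of a future (log) fit factor given one past
    measurement, assuming the pair is bivariate normal with common mean mu,
    standard deviation sigma, and between-session correlation rho (as
    induced by a linear mixed-effects model).  Returns (mean, sd)."""
    mean = mu + rho * (past - mu)
    sd = sigma * math.sqrt(1.0 - rho ** 2)
    return mean, sd

def prob_above(past, threshold, mu, sigma, rho):
    """P(future > threshold | past): the chance the worker will still be
    adequately protected at the next test, under the normal model."""
    mean, sd = predict_future(past, mu, sigma, rho)
    z = (threshold - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2.0))   # normal survival function
```

Inverting `prob_above` for `past` at a target probability gives the kind of initial-test criterion value the article describes.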
Teyhen, Deydre S; Shaffer, Scott W; Butler, Robert J; Goffar, Stephen L; Kiesel, Kyle B; Rhon, Daniel I; Boyles, Robert E; McMillian, Daniel J; Williamson, Jared N; Plisky, Phillip J
2016-10-01
Performance on movement tests helps to predict injury risk in a variety of physically active populations. Understanding baseline measures for normal is an important first step. Determine differences in physical performance assessments and describe normative values for these tests based on military unit type. Assessment of power, balance, mobility, motor control, and performance on the Army Physical Fitness Test were assessed in a cohort of 1,466 soldiers. Analysis of variance was performed to compare the results based on military unit type (Rangers, Combat, Combat Service, and Combat Service Support) and analysis of covariance was performed to determine the influence of age and gender. Rangers performed the best on all performance and fitness measures (p < 0.05). Combat soldiers performed better than Combat Service and Service Support soldiers on several physical performance tests and the Army Physical Fitness Test (p < 0.05). Performance in Combat Service and Service Support soldiers was equivalent on most measures (p < 0.05). Functional performance and level of fitness varied significantly by military unit type. Understanding these differences will provide a foundation for future injury prediction and prevention strategies. Reprint & Copyright © 2016 Association of Military Surgeons of the U.S.
Life prediction modeling based on cyclic damage accumulation
NASA Technical Reports Server (NTRS)
Nelson, Richard S.
1988-01-01
A high temperature, low cycle fatigue life prediction method was developed. This method, Cyclic Damage Accumulation (CDA), was developed for use in predicting the crack initiation lifetime of gas turbine engine materials, where initiation was defined as a 0.030 inch surface length crack. A principal engineering feature of the CDA method is the minimum data base required for implementation. Model constants can be evaluated through a few simple specimen tests such as monotonic loading and rapid cycle fatigue. The method was expanded to account for the effects on creep-fatigue life of complex loadings such as thermomechanical fatigue, hold periods, waveshapes, mean stresses, multiaxiality, cumulative damage, coatings, and environmental attack. A significant data base was generated on the behavior of the cast nickel-base superalloy B1900+Hf, including hundreds of specimen tests under such loading conditions. This information is being used to refine and extend the CDA life prediction model, which is now nearing completion. The model is also being verified using additional specimen tests on wrought INCO 718, and the final version of the model is expected to be adaptable to almost any high-temperature alloy. The model is currently available in the form of equations and related constants. A proposed contract addition will make the model available in the near future in the form of a computer code to potential users.
Qiu, Zhijun; Zhou, Bo; Yuan, Jiangfeng
2017-11-21
Protein-protein interaction site (PPIS) prediction must deal with the diversity of interaction sites, which limits prediction accuracy. Use of proteins with unknown or unidentified interactions can also lead to missing interfaces, and such data errors are often brought into the training dataset. In response to these two problems, we used the minimum covariance determinant (MCD) method to refine the training data, exploiting its ability to remove outliers, and thereby built a predictor with better performance. In order to predict test data in practice, a method based on the Mahalanobis distance was devised to select proper test data as input for the predictor. With leave-one-out validation and an independent test, after the Mahalanobis distance screening our method achieved higher performance according to the Matthews correlation coefficient (MCC), although only a part of the test data could be predicted. These results indicate that data refinement is an efficient approach to improve protein-protein interaction site prediction. By further optimizing our method, it should be possible to develop predictors with better performance and a wide range of application. Copyright © 2017 Elsevier Ltd. All rights reserved.
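A crude sketch of the two ingredients named above, an MCD-style robust fit of the training data and Mahalanobis-distance screening, shown in two dimensions with a simple random-subset search standing in for the FAST-MCD algorithm:

```python
import random

def mean_cov(pts):
    """Sample mean and covariance (sxx, sxy, syy) of 2-D points."""
    n = len(pts)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    sxx = sum((p[0] - mx) ** 2 for p in pts) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in pts) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in pts) / (n - 1)
    return (mx, my), (sxx, sxy, syy)

def mcd_fit(pts, trials=200, seed=0):
    """Crude subset-search approximation of the minimum covariance
    determinant (MCD) estimator: among random subsets of size
    h = (n + d + 1) // 2, keep the one whose covariance determinant is
    smallest, yielding outlier-resistant location and scatter."""
    rng = random.Random(seed)
    h = (len(pts) + 3) // 2          # d = 2 here
    best = None
    for _ in range(trials):
        mean, cov = mean_cov(rng.sample(pts, h))
        det = cov[0] * cov[2] - cov[1] ** 2
        if best is None or det < best[0]:
            best = (det, mean, cov)
    return best[1], best[2]

def mahalanobis_sq(p, mean, cov):
    """Squared Mahalanobis distance (2-D covariance inverted in closed form)."""
    (mx, my), (sxx, sxy, syy) = mean, cov
    dx, dy = p[0] - mx, p[1] - my
    det = sxx * syy - sxy * sxy
    return (syy * dx * dx - 2.0 * sxy * dx * dy + sxx * dy * dy) / det
```

Training points whose squared distance exceeds a chi-square cutoff (about 7.38 at the 97.5% level with 2 degrees of freedom) would be dropped before training, and test points screened the same way, mirroring the paper's refine-then-select workflow.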
NASA Technical Reports Server (NTRS)
Sadunas, J. A.; French, E. P.; Sexton, H.
1973-01-01
Results from a 1/25 scale model S-2 stage base region thermal environment test are presented. Analytical results are included which reflect the effect of engine operating conditions, model scale, and turbo-pump exhaust gas injection on the base region thermal environment. Comparisons are made between full scale flight data, model test data, and analytical results. The report is prepared in two volumes. The description of analytical predictions and comparisons with flight data is presented. A tabulation of the test data is provided.
Marschollek, M; Nemitz, G; Gietzelt, M; Wolf, K H; Meyer Zu Schwabedissen, H; Haux, R
2009-08-01
Falls are among the predominant causes of morbidity and mortality in elderly persons and occur most often in geriatric clinics. Despite several studies that have identified parameters associated with elderly patients' fall risk, prediction models (e.g., based on geriatric assessment data) are currently not used on a regular basis, and technical aids that objectively assess mobility-associated parameters are likewise not in routine use. Our aims were to assess group differences in clinical data, common geriatric assessment data, and sensory gait measurements between fallers and non-fallers in a geriatric sample, and to derive and compare two prediction models based on assessment data alone (model #1) and with added sensory measurement data (model #2). For a sample of n=110 geriatric in-patients (81 women, 29 men) the following fall risk-associated assessments were performed: Timed 'Up & Go' (TUG) test, STRATIFY score, and Barthel index. During the TUG test the subjects wore a triaxial accelerometer, and sensory gait parameters were extracted from the recorded data. Group differences between fallers (n=26) and non-fallers (n=84) were compared using Student's t-test, and two classification tree prediction models were computed and compared. Significant differences between the two groups were found for the following parameters: time to complete the TUG test, transfer item (Barthel), recent falls (STRATIFY), pelvic sway while walking, and step length. Prediction model #1 (using common assessment data only) showed a sensitivity of 38.5% and a specificity of 97.6%; prediction model #2 (assessment data plus sensory gait parameters) performed with 57.7% and 100%, respectively. Significant differences between fallers and non-fallers among geriatric in-patients can thus be detected for several assessment subscores as well as for parameters recorded by simple accelerometric measurements during a common mobility test. Existing geriatric assessment data may be used for falls prediction on a regular basis.
Adding sensory data markedly improves the sensitivity of the prediction model and raises its specificity to 100%.
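The reported figures can be reproduced from a 2x2 confusion table; the counts below are back-calculated from the stated group sizes (26 fallers, 84 non-fallers) for model #1 and are illustrative:

```python
# Sensitivity and specificity from a 2x2 confusion table, with counts
# back-calculated from the reported model #1 performance.
def sens_spec(tp, fn, tn, fp):
    """Return (sensitivity, specificity) from confusion-table counts."""
    return tp / (tp + fn), tn / (tn + fp)

# 38.5% sensitivity on 26 fallers -> 10 true positives;
# 97.6% specificity on 84 non-fallers -> 82 true negatives.
sens, spec = sens_spec(tp=10, fn=16, tn=82, fp=2)
print(round(100 * sens, 1), round(100 * spec, 1))  # 38.5 97.6
```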
Evaluation of a training curriculum for prehospital trauma ultrasound.
Press, Gregory M; Miller, Sara K; Hassan, Iman A; Blankenship, Robert; del Junco, Deborah; Camp, Elizabeth; Holcomb, John B
2013-12-01
In the United States, ultrasound has rarely been incorporated into prehospital care, and scant descriptions of the processes used to train prehospital providers are available. Our objective was to evaluate the effectiveness of an extended focused assessment with sonography for trauma (EFAST) training curriculum that incorporated multiple educational modalities. We also aimed to determine if certain demographic factors predicted successful completion. All aeromedical prehospital providers (APPs) for a Level I trauma center took a 25-question computer-based test to ascertain baseline knowledge. Questions were categorized by content and format. Training over a 2-month period included a didactic course, a hands-on training session, proctored scanning sessions in the Emergency Department, six Internet-based training modules, pocket flashcards, a review session, and remedial training. At the conclusion of the training curriculum, the same test and an objective structured clinical examination were administered to evaluate knowledge gained. Thirty-three of 34 APPs completed training. The overall pre-test and post-test means and all content and format subsets showed significant improvement (p < 0.0001 for all). No APP passed the pre-test, and 28 of 33 passed the post-test with a mean score of 78%. No demographic variable predicted passing the post-test. Twenty-seven of 33 APPs passed the objective structured clinical examination, and the only predictive variable was passing the post-test (odds ratio 1.21, 95% confidence interval 1.00-1.25, p = 0.045). The implementation of a multifaceted EFAST prehospital training program is feasible. Significant improvement in overall and subset testing scores suggests that the test instrument was internally consistent and sufficiently sensitive to capture knowledge gained as a result of the training. Demographic variables were not predictive of test success. Copyright © 2013 Elsevier Inc. All rights reserved.
Rollins, Brent L; Ramakrishnan, Shravanan; Perri, Matthew
2014-01-01
Direct-to-consumer (DTC) advertising of predictive genetic tests (PGTs) has added a new dimension to health advertising. This study used an online survey based on the health belief model framework to examine and more fully understand consumers' responses and behavioral intentions in response to a PGT DTC advertisement. Overall, consumers reported moderate intentions to talk with their doctor and seek more information about PGTs after advertisement exposure, though consumers did not seem ready to take the advertised test or engage in active information search. Those who perceived greater threat from the disease, however, had significantly greater behavioral intentions and information search behavior.
Lantelme, Pierre; Eltchaninoff, Hélène; Rabilloud, Muriel; Souteyrand, Géraud; Dupré, Marion; Spaziano, Marco; Bonnet, Marc; Becle, Clément; Riche, Benjamin; Durand, Eric; Bouvier, Erik; Dacher, Jean-Nicolas; Courand, Pierre-Yves; Cassagnes, Lucie; Dávila Serrano, Eduardo E; Motreff, Pascal; Boussel, Loic; Lefèvre, Thierry; Harbaoui, Brahim
2018-05-11
The aim of this study was to develop a new scoring system based on thoracic aortic calcification (TAC) to predict 1-year cardiovascular and all-cause mortality. A calcified aorta is often associated with poor prognosis after transcatheter aortic valve replacement (TAVR), so a risk score encompassing aortic calcification may be valuable in identifying poor TAVR responders. The C4CAPRI (4 Cities for Assessing CAlcification PRognostic Impact) multicenter study included a training cohort (1,425 patients treated using TAVR between 2010 and 2014) and a contemporary test cohort (311 patients treated in 2015). TAC was measured by computed tomography pre-TAVR. CAPRI risk scores were based on the linear predictors of Cox models including TAC in addition to demographic factors, comorbidities, atherosclerotic disease, and cardiac function. CAPRI scores were constructed and tested in 2 independent cohorts. Cardiovascular and all-cause mortality at 1 year was 13.0% and 17.9%, respectively, in the training cohort and 8.2% and 11.8% in the test cohort. The inclusion of TAC in the model improved prediction: a 1-cm³ increase in TAC was associated with a 6% increase in cardiovascular mortality and a 4% increase in all-cause mortality. The predicted and observed survival probabilities were highly correlated (slopes >0.9 for both cardiovascular and all-cause mortality). The model's predictive power was fair (AUC 68%; 95% confidence interval [CI]: 64% to 72%) for both cardiovascular and all-cause mortality, and the model performed similarly in the training and test cohorts. The CAPRI score, which combines the TAC variable with classical prognostic factors, is predictive of 1-year cardiovascular and all-cause mortality, and its predictive performance was confirmed in an independent contemporary cohort. CAPRI scores are highly relevant to current practice and strengthen the evidence base for decision making in valvular interventions. Routine use may help prevent futile procedures.
Copyright © 2018 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
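As a back-of-the-envelope sanity check on the reported per-unit effect, a per-cm³ hazard increase in a Cox-type linear predictor compounds multiplicatively over larger calcification volumes; the sketch below is illustrative only, not the CAPRI score itself:

```python
# How a fixed per-unit hazard increase compounds in a Cox-type score:
# a 6% increase per 1 cm3 of TAC implies roughly 79% over a 10-cm3 difference.
def relative_hazard(per_unit_increase, units):
    return (1 + per_unit_increase) ** units

print(round(relative_hazard(0.06, 1), 2))   # 1.06 -> +6% per cm3
print(round(relative_hazard(0.06, 10), 2))  # 1.79 -> 10-cm3 difference
```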
Song, Yang; Zhang, Yu-Dong; Yan, Xu; Liu, Hui; Zhou, Minxiong; Hu, Bingwen; Yang, Guang
2018-04-16
Deep learning is the most promising methodology for automatic computer-aided diagnosis of prostate cancer (PCa) with multiparametric MRI (mp-MRI). To develop an automatic approach based on a deep convolutional neural network (DCNN) to classify PCa and noncancerous tissues (NC) with mp-MRI. Retrospective. In all, 195 patients with localized PCa were collected from the PROSTATEx database. In total, 159/17/19 patients with 444/48/55 observations (215/23/23 PCas and 229/25/32 NCs) were randomly selected for training/validation/testing, respectively. T2-weighted, diffusion-weighted, and apparent diffusion coefficient images. A radiologist manually labeled the regions of interest of PCas and NCs and estimated the Prostate Imaging Reporting and Data System (PI-RADS) scores for each region. Inspired by VGG-Net, we designed a patch-based DCNN model to distinguish between PCa and NCs based on a combination of mp-MRI data. Additionally, an enhanced prediction method was used to improve the prediction accuracy. The performance of DCNN prediction was tested using a receiver operating characteristic (ROC) curve, and the area under the ROC curve (AUC), sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated. Moreover, the predicted result was compared with the PI-RADS score to evaluate its clinical value using decision curve analysis. A two-sided Wilcoxon signed-rank test with statistical significance set at 0.05 was used. The DCNN produced excellent diagnostic performance in distinguishing between PCa and NC for the testing dataset, with an AUC of 0.944 (95% confidence interval: 0.876-0.994), sensitivity of 87.0%, specificity of 90.6%, PPV of 87.0%, and NPV of 90.6%. The decision curve analysis revealed that the joint model of PI-RADS and DCNN provided additional net benefits compared with the DCNN model and the PI-RADS scheme.
The proposed DCNN-based model with enhanced prediction yielded high performance in statistical analysis, suggesting that DCNN could be used in computer-aided diagnosis (CAD) for PCa classification. Level of Evidence: 3. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018. © 2018 International Society for Magnetic Resonance in Medicine.
Cognitive Style and Self-Efficacy: Predicting Student Success in Online Distance Education
ERIC Educational Resources Information Center
DeTure, Monica
2004-01-01
This study was designed to identify those learner attributes that may be used to predict student success (in terms of grade point average) in a Web-based distance education setting. Students enrolled in six Web-based, general education distance education courses at a community college were asked to complete the Group Embedded Figures Test for…
ERIC Educational Resources Information Center
Diana, Rachel A.; Yonelinas, Andrew P.; Ranganath, Charan
2008-01-01
Performance on tests of source memory is typically based on recollection of contextual information associated with an item. However, recent neuroimaging results have suggested that the perirhinal cortex, a region thought to support familiarity-based item recognition, may support source attributions if source information is encoded as a feature of…
Predicting links based on knowledge dissemination in complex network
NASA Astrophysics Data System (ADS)
Zhou, Wen; Jia, Yifan
2017-04-01
Link prediction is the task of mining missing links in networks or predicting the next vertex pair to be connected by a link. Many link prediction methods have been inspired by the evolutionary processes of networks. In this paper, a new mechanism for the formation of complex networks called knowledge dissemination (KD) is proposed, under the assumption that knowledge disseminates through the paths of a network. Accordingly, a new link prediction method, knowledge dissemination based link prediction (KDLP), is proposed to test KD. KDLP characterizes vertex similarity based on knowledge quantity (KQ), which measures the importance of a vertex through the H-index. Extensive numerical simulations on six real-world networks demonstrate that KDLP is a strong link prediction method that achieves higher prediction accuracy than four well-known similarity measures: common neighbors, the local path index, average commute time, and the matrix forest index. Furthermore, based on the common conclusion that an excellent link prediction method reveals a good evolving mechanism, the experimental results suggest that KD is a plausible evolving mechanism for the formation of complex networks.
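A toy sketch of the two ingredients mentioned above: an H-index computed here over a vertex's neighbor degrees as a stand-in for knowledge quantity, alongside the common-neighbors baseline KDLP is compared against. The small graph is hypothetical, and this simplified KQ only approximates the paper's definition:

```python
# Toy sketch: H-index over neighbor degrees as a stand-in for knowledge
# quantity (KQ), plus the common-neighbors similarity baseline.
def h_index(values):
    """Largest h such that at least h of the values are >= h."""
    vals = sorted(values, reverse=True)
    h = 0
    while h < len(vals) and vals[h] >= h + 1:
        h += 1
    return h

# hypothetical undirected network as an adjacency map
adj = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b", "d", "e"},
    "d": {"a", "c"},
    "e": {"c"},
}

def kq(v):
    return h_index([len(adj[u]) for u in adj[v]])

def common_neighbors(u, v):
    return len(adj[u] & adj[v])

print(kq("a"))                     # 2
print(common_neighbors("b", "d"))  # 2 (shared neighbors: a and c)
```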
Gaziano, Thomas A; Young, Cynthia R; Fitzmaurice, Garrett; Atwood, Sidney; Gaziano, J Michael
2008-01-01
Background Around 80% of all cardiovascular deaths occur in developing countries. Assessment of those patients at high risk is an important strategy for prevention. Since developing countries have limited resources for prevention strategies that require laboratory testing, we assessed whether a risk prediction method that did not require any laboratory tests could be as accurate as one requiring laboratory information. Methods The National Health and Nutrition Examination Survey (NHANES) was a prospective cohort study of 14,407 US participants aged 25-74 years at the time they were first examined (between 1971 and 1975). Our follow-up study population included participants with complete information on these surveys who did not report a history of cardiovascular disease (myocardial infarction, heart failure, stroke, angina) or cancer, yielding an analysis dataset of N=6186. We compared how well either method could predict first-time fatal and non-fatal cardiovascular disease events in this cohort. For the laboratory-based model, which required blood testing, we used standard risk factors to assess risk of cardiovascular disease: age, systolic blood pressure, smoking status, total cholesterol, reported diabetes status, and current treatment for hypertension. For the non-laboratory-based model, we substituted body-mass index for cholesterol. Findings In the cohort of 6186, there were 1529 first-time cardiovascular events and 578 (38%) deaths due to cardiovascular disease over 21 years. In women, the laboratory-based model was useful for predicting events, with a c statistic of 0.829; the c statistic of the non-laboratory-based model was 0.831. In men, the results were similar (0.784 for the laboratory-based model and 0.783 for the non-laboratory-based model). Results were also similar between the laboratory-based and non-laboratory-based models in both men and women when restricted to fatal events only.
Interpretation A method that uses non-laboratory-based risk factors predicted cardiovascular events as accurately as one that relied on laboratory-based values. This approach could simplify risk assessment in situations where laboratory testing is inconvenient or unavailable. PMID:18342687
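The c statistic being compared can be computed directly as the proportion of concordant event/non-event pairs (ties counting one half); the risk scores below are hypothetical:

```python
# The c statistic for a binary outcome: the probability that a randomly
# chosen event receives a higher predicted risk than a randomly chosen
# non-event, with ties counted as one half.
def c_statistic(event_scores, nonevent_scores):
    pairs = concordant = ties = 0
    for e in event_scores:
        for n in nonevent_scores:
            pairs += 1
            if e > n:
                concordant += 1
            elif e == n:
                ties += 1
    return (concordant + 0.5 * ties) / pairs

events = [0.9, 0.8, 0.6]          # hypothetical predicted risks for cases
nonevents = [0.7, 0.4, 0.3, 0.2]  # hypothetical predicted risks for non-cases
print(round(c_statistic(events, nonevents), 3))  # 0.917
```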
Survival Regression Modeling Strategies in CVD Prediction.
Barkhordari, Mahnaz; Padyab, Mojgan; Sardarinia, Mahsa; Hadaegh, Farzad; Azizi, Fereidoun; Bozorgmanesh, Mohammadreza
2016-04-01
A fundamental part of prevention is prediction, and potential predictors are the sine qua non of prediction models. However, whether incorporating novel predictors into prediction models can be directly translated into added predictive value remains an area of dispute. The difference between the predictive power of a model with (enhanced model) and without (baseline model) a certain predictor is generally regarded as an indicator of the predictive value added by that predictor. Indices such as discrimination and calibration have long been used in this regard. Recently, the use of added predictive value has been suggested when comparing the predictive performance of models with and without novel biomarkers. User-friendly statistical software capable of implementing novel statistical procedures is conspicuously lacking, and this shortcoming has restricted the implementation of such novel model assessment methods. We aimed to construct Stata commands to help researchers obtain the following: (1) the Nam-D'Agostino χ² goodness-of-fit test; and (2) cut point-free and cut point-based net reclassification improvement indices (NRI), relative and absolute integrated discriminatory improvement indices (IDI), and survival-based regression analyses. We applied the commands to real data on women participating in the Tehran Lipid and Glucose Study (TLGS) to examine whether information relating to a family history of premature cardiovascular disease (CVD), waist circumference, and fasting plasma glucose can improve the predictive performance of Framingham's general CVD risk algorithm. The command for survival models is adpredsurv. Herein we have described the Stata package "adpredsurv" for calculation of the Nam-D'Agostino χ² goodness-of-fit test as well as cut point-free and cut point-based NRI, relative and absolute IDI, and survival-based regression analyses.
We hope this work encourages the use of novel methods for examining the predictive capacity of the emerging plethora of novel biomarkers.
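For readers unfamiliar with the cut point-based NRI that such packages report, it can be sketched as follows: upward risk reclassification should occur among events and downward reclassification among non-events. The counts below are hypothetical:

```python
# Cut point-based net reclassification improvement (NRI): the net proportion
# of events reclassified upward plus the net proportion of non-events
# reclassified downward when moving from the baseline to the enhanced model.
def nri(up_events, down_events, n_events,
        up_nonevents, down_nonevents, n_nonevents):
    event_term = (up_events - down_events) / n_events
    nonevent_term = (down_nonevents - up_nonevents) / n_nonevents
    return event_term + nonevent_term

# hypothetical reclassification counts
print(round(nri(15, 5, 100, 10, 30, 400), 3))  # 0.1 + 0.05 = 0.15
```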
Lauer, Michael S; Pothier, Claire E; Magid, David J; Smith, S Scott; Kattan, Michael W
2007-12-18
The exercise treadmill test is recommended for risk stratification among patients with intermediate to high pretest probability of coronary artery disease. Posttest risk stratification is based on the Duke treadmill score, which includes only functional capacity and measures of ischemia. To develop and externally validate a post-treadmill test, multivariable mortality prediction rule for adults with suspected coronary artery disease and normal electrocardiograms. Prospective cohort study conducted from September 1990 to May 2004. Exercise treadmill laboratories in a major medical center (derivation set) and a separate HMO (validation set). 33,268 patients in the derivation set and 5821 in the validation set. All patients had normal electrocardiograms and were referred for evaluation of suspected coronary artery disease. The derivation set patients were followed for a median of 6.2 years. A nomogram-illustrated model was derived on the basis of variables easily obtained in the stress laboratory, including age; sex; history of smoking, hypertension, diabetes, or typical angina; and exercise findings of functional capacity, ST-segment changes, symptoms, heart rate recovery, and frequent ventricular ectopy in recovery. The derivation data set included 1619 deaths. Although both the Duke treadmill score and our nomogram-illustrated model were significantly associated with death (P < 0.001), the nomogram was better at discrimination (concordance index for right-censored data, 0.83 vs. 0.73) and calibration. We reclassified many patients with intermediate- to high-risk Duke treadmill scores as low risk on the basis of the nomogram. The model also predicted 3-year mortality rates well in the validation set: Based on an optimal cut-point for a negative predictive value of 0.97, derivation and validation rates were, respectively, 1.7% and 2.5% below the cut-point and 25% and 29% above the cut-point. 
Blood test-based measures or left ventricular ejection fraction were not included. The nomogram can be applied only to patients with a normal electrocardiogram. Clinical utility remains to be tested. A simple nomogram based on easily obtained pretest and exercise test variables predicted all-cause mortality in adults with suspected coronary artery disease and normal electrocardiograms.
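For reference, the Duke treadmill score that the nomogram was compared against is a simple linear combination of exercise time, ST deviation, and an angina index, with conventional risk bands; the patient values below are hypothetical:

```python
# The Duke treadmill score and its conventional risk categories
# (low >= +5, moderate -10 to +4, high <= -11).
def duke_treadmill_score(exercise_minutes, st_deviation_mm, angina_index):
    """angina_index: 0 = none, 1 = non-limiting, 2 = exercise-limiting."""
    return exercise_minutes - 5 * st_deviation_mm - 4 * angina_index

def duke_risk_category(score):
    if score >= 5:
        return "low"
    if score >= -10:
        return "moderate"
    return "high"

# hypothetical patient: 9 min exercise, 1 mm ST deviation, non-limiting angina
s = duke_treadmill_score(exercise_minutes=9, st_deviation_mm=1, angina_index=1)
print(s, duke_risk_category(s))  # 0 moderate
```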
Prediction Accuracy of Error Rates for MPTB Space Experiment
NASA Technical Reports Server (NTRS)
Buchner, S. P.; Campbell, A. B.; Davis, D.; McMorrow, D.; Petersen, E. L.; Stassinopoulos, E. G.; Ritter, J. C.
1998-01-01
This paper addresses the accuracy of radiation-induced upset-rate predictions in space using the results of ground-based measurements together with standard environmental and device models. The study is focused on two part types, 16-Mb NEC DRAMs (UPD4216) and 1-Kb SRAMs (AMD93L422), both of which are currently in space on board the Microelectronics and Photonics Test Bed (MPTB). To date, ground-based measurements of proton-induced single event upset (SEU) cross sections as a function of energy have been obtained and combined with models of the proton environment to predict proton-induced error rates in space. The role played by uncertainties in the environmental models will be determined by comparing the modeled radiation environment with the actual environment measured aboard MPTB. Heavy-ion-induced upsets have also been obtained from MPTB and will be compared with the "predicted" error rate following ground testing to be done in the near future. These results should help identify sources of uncertainty in predictions of SEU rates in space.
Prediction of protein-protein interactions based on PseAA composition and hybrid feature selection.
Liu, Liang; Cai, Yudong; Lu, Wencong; Feng, Kaiyan; Peng, Chunrong; Niu, Bing
2009-03-06
Based on pseudo amino acid (PseAA) composition and a novel hybrid feature selection frame, this paper presents a computational system to predict PPIs (protein-protein interactions) using 8796 protein pairs. These pairs are coded by PseAA composition, resulting in 114 features. A hybrid feature selection system, mRMR-KNNs-wrapper, is applied to obtain an optimized feature set by excluding poorly performing and/or redundant features, leaving 103 features. Using the optimized 103-feature subset, a prediction model is trained and tested in the k-nearest neighbors (KNNs) learning system. This prediction model achieves an overall prediction accuracy of 76.18%, evaluated by a 10-fold cross-validation test, which is 1.46% higher than that obtained using the initial 114 features and 6.51% higher than that obtained using the 20 features of the conventional amino acid composition. The PPIs predictor developed for this research is available for public use at http://chemdata.shu.edu.cn/ppi.
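The KNN prediction step itself is straightforward: classify a feature vector by majority vote among its k nearest training vectors under Euclidean distance. The 2-feature toy vectors below are hypothetical, not PseAA features:

```python
# Minimal k-nearest-neighbors classifier: majority vote among the k
# training vectors closest (Euclidean distance) to the query.
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label) pairs."""
    nearest = sorted(train, key=lambda t: math.dist(t[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

train = [((0.10, 0.20), "interacting"), ((0.20, 0.10), "interacting"),
         ((0.15, 0.25), "interacting"), ((0.90, 0.80), "non-interacting"),
         ((0.80, 0.90), "non-interacting")]
print(knn_predict(train, (0.12, 0.18)))  # interacting
```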
Arredondo, Elva Maria; Pollak, Kathryn; Costanzo, Philip R
2008-12-01
The goals of this study are to evaluate (a) the effectiveness of a stage model in predicting Latinas' self-report of obtaining a Pap test and (b) the unique role of psychosocial/cultural factors in predicting progress toward behavior change. One-on-one structured interviews with monolingual Spanish-speaking Latinas (n=190) were conducted. Most participants (85%) intended to obtain a Pap smear within 1 year; therefore, staging women based on intention was not possible. Moreover, results from the polychotomous hierarchical logistic regression suggest that psychosocial and cultural factors were independent predictors of Pap test history. A stage model may not be appropriate for predicting Pap test screening among Latinas. Results suggest that unique cultural, psychosocial, and demographic factors may inhibit cervical cancer screening practices. Clinicians may need to tailor messages on these cultural and psychosocial factors to increase Pap testing among Latinas.
Aiba née Kaneko, Maki; Hirota, Morihiko; Kouzuki, Hirokazu; Mori, Masaaki
2015-02-01
Genotoxicity is the most commonly used endpoint to predict the carcinogenicity of chemicals. The International Conference on Harmonization (ICH) M7 Guideline on Assessment and Control of DNA Reactive (Mutagenic) Impurities in Pharmaceuticals to Limit Potential Carcinogenic Risk offers guidance on (quantitative) structure-activity relationship ((Q)SAR) methodologies that predict the outcome of the bacterial mutagenicity assay for actual and potential impurities. We examined the effectiveness of the (Q)SAR approach with the combination of DEREK NEXUS, an expert rule-based system, and ADMEWorks, a statistics-based system, for the prediction of not only mutagenic potential in the Ames test but also genotoxic potential in mutagenicity and clastogenicity tests, using a data set of 342 chemicals extracted from the literature. Prediction of mutagenic or genotoxic potential by DEREK NEXUS or ADMEWorks alone showed high sensitivity and concordance, while prediction by the combination of DEREK NEXUS and ADMEWorks (battery system) showed the highest sensitivity and concordance among the three methods but the lowest specificity; the number of false negatives was reduced with the battery system. We also separately predicted the mutagenic and genotoxic potential of 41 cosmetic ingredients listed in the International Nomenclature of Cosmetic Ingredients (INCI) among the 342 chemicals. Although specificity was low with the battery system, sensitivity and concordance were high. These results suggest that the battery system consisting of DEREK NEXUS and ADMEWorks is useful for predicting the genotoxic potential of chemicals, including cosmetic ingredients.
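The battery system's combination rule can be sketched as a simple logical OR: a chemical is called positive if either system flags it, which reduces false negatives (higher sensitivity) at the cost of false positives. The toy truth labels and predictions below are hypothetical:

```python
# Sketch of a two-system battery: call a chemical positive if either system
# calls it positive. On this hypothetical panel, sensitivity rises from
# 0.5 (each system alone) to 0.75 for the battery, mirroring the trade-off
# described in the abstract.
truth   = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = genotoxic
derek   = [1, 0, 1, 0, 0, 0, 1, 0]  # rule-based system's calls
admew   = [0, 1, 1, 0, 0, 1, 0, 0]  # statistics-based system's calls
battery = [d or a for d, a in zip(derek, admew)]

def sensitivity(pred):
    return sum(p for p, t in zip(pred, truth) if t) / sum(truth)

print(sensitivity(derek), sensitivity(admew), sensitivity(battery))  # 0.5 0.5 0.75
```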
Aadland, E; Andersen, L B; Lerum, Ø; Resaland, G K
2018-03-01
Measurement of aerobic fitness by determining peak oxygen consumption (VO2peak) is often not feasible in children and adolescents, so field tests such as the Andersen test are required in many settings, for example in most school-based studies. This study provides cross-validated prediction equations for VO2peak based on the Andersen test in 10- and 16-year-old children. We included 235 children (n = 113 10-year-olds and n = 122 16-year-olds) who performed the Andersen test and a progressive treadmill test to exhaustion to determine VO2peak. Joint and sex-specific prediction equations were derived and tested in 20 random samples. Performance in terms of systematic error (bias) and random error (limits of agreement) was evaluated by means of Bland-Altman plots. Bias varied from -4.28 to 5.25 mL/kg/min across testing datasets, sex, and the 2 age groups. Sex-specific equations (mean bias -0.42 to 0.16 mL/kg/min) performed somewhat better than joint equations (-1.07 to 0.84 mL/kg/min). Limits of agreement were substantial across all datasets, sex, and both age groups, but were slightly narrower in 16-year-olds (5.84-13.29 mL/kg/min) than in 10-year-olds (9.60-15.15 mL/kg/min). We suggest that the presented equations can be used to predict VO2peak from Andersen test performance in children and adolescents at the group level. Although the Andersen test appears to be a good measure of aerobic fitness, researchers should interpret cross-sectional individual-level predictions of VO2peak with caution due to large random measurement errors. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
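The Bland-Altman summary used above reduces to two quantities: the bias is the mean of the prediction errors, and the limits of agreement are the bias plus or minus 1.96 standard deviations. The paired values below (mL/kg/min) are hypothetical:

```python
# Bland-Altman bias and 95% limits of agreement for paired
# predicted vs. measured values (hypothetical data, mL/kg/min).
import math

def bland_altman(predicted, measured):
    diffs = [p - m for p, m in zip(predicted, measured)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

pred = [48.2, 51.0, 44.5, 55.3, 49.9]
meas = [47.0, 52.5, 45.0, 54.0, 50.5]
bias, (lo, hi) = bland_altman(pred, meas)
print(round(bias, 2))  # -0.02
```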
Fatigue life prediction of liquid rocket engine combustor with subscale test verification
NASA Astrophysics Data System (ADS)
Sung, In-Kyung
Reusable rocket systems such as the Space Shuttle introduced a new era in propulsion system design for economic feasibility. Practical reusable systems require an order-of-magnitude increase in life; to achieve this, improved methods are needed to assess failure mechanisms and to predict the life cycles of rocket combustors. A general goal of the research was to demonstrate the use of a subscale rocket combustor prototype in a cost-effective test program. Life-limiting factors and metal behaviors under repeated loads were surveyed and reviewed. The life prediction theories are presented, with an emphasis on studies that used subscale test hardware for model validation. From this review, low cycle fatigue (LCF) and creep-fatigue interaction (ratcheting) were identified as the main life-limiting factors of the combustor. Several life prediction methods, including conventional and advanced viscoplastic models, were used to predict life cycles due to low cycle thermal stress, transient effects, and creep rupture damage. Creep-fatigue interaction and cyclic hardening were also investigated. A prediction method based on 2D beam theory was modified using 3D plate deformation theory to provide an extended prediction method. For experimental validation, two small-scale annular plug nozzle thrusters were designed, built, and tested. The test article was composed of a water-cooled liner, a plug annular nozzle, and a 200-psia precombustor that used decomposed hydrogen peroxide as the oxidizer and JP-8 as the fuel. The first combustor was tested cyclically at the Advanced Propellants and Combustion Laboratory at Purdue University. Testing was stopped after 140 cycles because of an unpredicted failure mechanism, although an increasing hot spot had developed at the location where failure was predicted. A second combustor was designed to avoid the previous failure; however, it was overpressurized and deformed beyond repair during a cold-flow test.
The test results are discussed and compared to the analytical and numerical predictions, although a detailed comparison was not performed due to the lack of test data resulting from the failure of the test article. Some theoretical and experimental aspects, such as the fin effect and rounded corners, were found to reduce the discrepancy between prediction and test results.
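A classical baseline for LCF life estimates of the kind discussed above is the Coffin-Manson strain-life relation, Δε_p/2 = ε_f'(2N_f)^c. This is a textbook relation, not the thesis's viscoplastic models, and the material constants below are illustrative, not values for this combustor liner:

```python
# Coffin-Manson low-cycle-fatigue life: delta_eps_p/2 = eps_f' * (2*N_f)^c,
# solved for cycles to failure N_f. Constants eps_f and c are illustrative.
def coffin_manson_life(plastic_strain_amplitude, eps_f=0.5, c=-0.6):
    """Return cycles to failure N_f from the plastic strain amplitude."""
    two_nf = (plastic_strain_amplitude / eps_f) ** (1.0 / c)
    return two_nf / 2.0

# life at 1% plastic strain amplitude under the illustrative constants
print(round(coffin_manson_life(0.01)))
```

Larger strain amplitudes give shorter predicted lives, which is the qualitative behavior the subscale cyclic tests probe.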
NASA Technical Reports Server (NTRS)
Rutledge, Sharon K.
1999-01-01
Spacecraft in low Earth orbit (LEO) are subjected to many components of the environment, which can cause them to degrade much more rapidly than intended and greatly shorten their functional life. The atomic oxygen, ultraviolet radiation, and cross contamination present in LEO can affect sensitive surfaces such as thermal control paints, multilayer insulation, solar array surfaces, and optical surfaces. The LEO Spacecraft Materials Test (LEO-SMT) program is being conducted to assess the effects of simulated LEO exposure on current spacecraft materials, both to increase understanding of LEO degradation processes and to enable the prediction of in-space performance and durability. Using ground-based simulation facilities to test the durability of materials currently flying in LEO allows researchers to compare the degradation evidenced in the ground-based facilities with that evidenced on orbit. This will allow refinement of ground laboratory test systems and the development of algorithms to predict the durability and performance of new materials in LEO from ground test results. Accurate predictions based on ground tests could reduce development costs and increase reliability. The wide variety of national and international materials being tested represents materials in functional use on spacecraft in LEO; the more varied the types of materials tested, the greater the probability that researchers will develop and validate predictive models of spacecraft long-term performance and durability. Organizations currently participating in the program are IIT Research Institute (USA), Lockheed Martin (USA), MAP (France), SOREQ Nuclear Research Center (Israel), TNO Institute of Applied Physics (The Netherlands), and UBE Industries, Ltd. (Japan). These represent some of the major suppliers of thermal control and sensor materials currently flying in LEO.
The participants provide materials that are exposed to selected levels of atomic oxygen, vacuum ultraviolet radiation, contamination, or synergistic combined environments at the NASA Lewis Research Center. Changes in characteristics that could affect mission performance or lifetime are then measured. These characteristics include changes in mass, solar absorptance, and thermal emittance. The durability of spacecraft materials from U.S. suppliers is then compared with those of materials from other participating countries. Lewis will develop and validate performance and durability prediction models using this ground data and available space data. NASA welcomes the opportunity to consider additional international participants in this program, which should greatly aid future spacecraft designers as they select materials for LEO missions.
Creep fatigue life prediction for engine hot section materials (isotropic)
NASA Technical Reports Server (NTRS)
Moreno, Vito; Nissley, David; Lin, Li-Sen Jim
1985-01-01
The first two years of a two-phase program aimed at improving the high-temperature crack initiation life prediction technology for gas turbine hot section components are discussed. In the Phase 1 (baseline) effort, low cycle fatigue (LCF) models, using a data base generated for a cast nickel-base gas turbine hot section alloy (B1900+Hf), were evaluated for their ability to predict the crack initiation life for relevant creep-fatigue loading conditions and to define the data required for determination of model constants. The variables included strain range and rate, mean strain, strain hold times, and temperature. None of the models predicted all of the life trends within reasonable data requirements. A Cyclic Damage Accumulation (CDA) model was therefore developed which follows an exhaustion-of-material-ductility approach. Material ductility is estimated based on observed similarities of deformation structure between fatigue, tensile, and creep tests. The cycle damage function is based on total strain range, maximum stress, and stress amplitude, and includes both time-independent and time-dependent components. The CDA model accurately predicts all of the trends in creep-fatigue life with loading conditions. In addition, all of the CDA model constants are determinable from rapid-cycle, fully reversed fatigue tests and monotonic tensile and/or creep data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rest, J; Gehl, S M
1979-01-01
GRASS-SST and FASTGRASS are mechanistic computer codes for predicting fission-gas behavior in UO2-base fuels during steady-state and transient conditions. FASTGRASS was developed in order to satisfy the need for a fast-running alternative to GRASS-SST. Although based on GRASS-SST, FASTGRASS is approximately an order of magnitude quicker in execution. The GRASS-SST transient analysis has evolved through comparisons of code predictions with the fission-gas release and physical phenomena that occur during reactor operation and transient direct-electrical-heating (DEH) testing of irradiated light-water reactor fuel. The FASTGRASS calculational procedure is described in this paper, along with models of key physical processes included in both FASTGRASS and GRASS-SST. Predictions of fission-gas release obtained from GRASS-SST and FASTGRASS analyses are compared with experimental observations from a series of DEH tests. The major conclusion is that the computer codes should include an improved model for the evolution of the grain-edge porosity.
NASA Astrophysics Data System (ADS)
Arel, Ersin
2012-06-01
The infamous soils of Adapazari, Turkey, which failed extensively during the 46-s-long magnitude-7.4 earthquake in 1999, have since been the subject of a research program. Boreholes, piezocone soundings and voluminous laboratory testing have enabled researchers to apply sophisticated methods to determine the soil profiles in the city using the existing database. This paper describes the use of an artificial neural network (ANN) model to predict the complex soil profiles of Adapazari based on cone penetration test (CPT) results. More than 3236 field CPT readings have been collected from 117 soundings spread over an area of 26 km². An attempt has been made to develop the ANN model using multilayer perceptrons trained with a feed-forward back-propagation algorithm. The results show that the ANN model is fairly accurate in predicting complex soil profiles. Soil identification from CPT results has principally been based on the Robertson charts. Applying neural network systems using the chart offers a powerful and rapid route to reliable prediction of the soil profiles.
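As a rough illustration of the approach, the sketch below trains a minimal single-hidden-layer perceptron by backpropagation in pure Python. The two input features and the "clay-like" labels are hypothetical stand-ins for normalized CPT readings; this is not the Adapazari data set or the study's network architecture.

```python
import math, random

random.seed(1)

def train_mlp(X, y, hidden=4, lr=0.5, epochs=3000):
    """Tiny one-hidden-layer perceptron with sigmoid units, trained by
    per-sample backpropagation on binary labels (a toy stand-in for the
    CPT-to-soil-class mapping)."""
    n_in = len(X[0])
    W1 = [[random.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    W2 = [random.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # forward pass
            h = [sig(sum(w * v for w, v in zip(W1[j], xi)) + b1[j]) for j in range(hidden)]
            o = sig(sum(w * v for w, v in zip(W2, h)) + b2)
            # backward pass (squared-error loss, sigmoid derivatives)
            d_o = (o - yi) * o * (1 - o)
            for j in range(hidden):
                d_h = d_o * W2[j] * h[j] * (1 - h[j])
                W2[j] -= lr * d_o * h[j]
                for k in range(n_in):
                    W1[j][k] -= lr * d_h * xi[k]
                b1[j] -= lr * d_h
            b2 -= lr * d_o
    def predict(xi):
        h = [sig(sum(w * v for w, v in zip(W1[j], xi)) + b1[j]) for j in range(hidden)]
        return sig(sum(w * v for w, v in zip(W2, h)) + b2)
    return predict

# Hypothetical normalized (cone resistance, friction ratio) pairs; 1 = "clay-like"
X = [[0.2, 0.8], [0.3, 0.9], [0.25, 0.7], [0.8, 0.2], [0.9, 0.3], [0.85, 0.1]]
y = [1, 1, 1, 0, 0, 0]
predict = train_mlp(X, y)
labels = [round(predict(xi)) for xi in X]
print(labels)
```

The toy problem is linearly separable, so even this small network recovers the training labels; the study's model additionally faced noisy, overlapping soil classes.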
Testing holography using lattice super-Yang-Mills theory on a 2-torus
NASA Astrophysics Data System (ADS)
Catterall, Simon; Jha, Raghav G.; Schaich, David; Wiseman, Toby
2018-04-01
We consider maximally supersymmetric SU(N) Yang-Mills theory in Euclidean signature compactified on a flat two-dimensional torus with antiperiodic ("thermal") fermion boundary conditions imposed on one cycle. At large N, holography predicts that this theory describes certain black hole solutions in type IIA and IIB supergravity, and we use lattice gauge theory to test this. Unlike the one-dimensional quantum mechanics case, where there is only the dimensionless temperature to vary, here we emphasize that there are two more parameters which determine the shape of the flat torus. While a rectangular Euclidean torus yields a thermal interpretation, allowing for skewed tori modifies the holographic dual black hole predictions and provides another direction in which to test holography. Our lattice calculations are based on a supersymmetric formulation naturally adapted to a particular skewing. Using this we perform simulations up to N = 16 with several lattice spacings for both skewed and rectangular tori. We observe the two expected black hole phases with their predicted behavior, with a transition between them that is consistent with the gravity prediction based on the Gregory-Laflamme transition.
NASA Astrophysics Data System (ADS)
Zanino, R.; Bonifetto, R.; Brighenti, A.; Isono, T.; Ozeki, H.; Savoldi, L.
2018-07-01
The ITER toroidal field insert (TFI) coil is a single-layer Nb3Sn solenoid tested in 2016-2017 at the National Institutes for Quantum and Radiological Science and Technology (formerly JAEA) in Naka, Japan. The TFI, the last in a series of ITER insert coils, was tested in operating conditions relevant to the actual ITER TF coils by inserting it in the borehole of the central solenoid model coil, which provided the background magnetic field. In this paper, we consider the five quench propagation tests that were performed using one or two inductive heaters (IHs) as drivers; of these, three used just one IH but with increasing delay times, up to 7.5 s, between quench detection and the TFI current dump. The results of the 4C code prediction of the quench propagation up to the current dump are presented first, based on simulations performed before the tests. We then describe the experimental results, showing good reproducibility. Finally, we compare the 4C code predictions with the measurements, confirming the 4C code's capability to accurately predict the quench propagation and the evolution of total and local voltages, as well as of the hot spot temperature. To the best of our knowledge, such a predictive validation exercise is performed here for the first time for the quench of a Nb3Sn coil. Discrepancies between prediction and measurement are found in the evolution of the jacket temperatures, in the He pressurization and quench acceleration in the late phase of the transient before the dump, and in the early evolution of the inlet and outlet He mass flow rates. Based on the lessons learned in the predictive exercise, the model is then refined to try to improve a posteriori (i.e. in interpretive, as opposed to predictive, mode) the agreement between simulation and experiment.
NASA Astrophysics Data System (ADS)
Rahaman, Md. Mashiur; Islam, Hafizul; Islam, Md. Tariqul; Khondoker, Md. Reaz Hasan
2017-12-01
Maneuverability and resistance prediction with suitable accuracy is essential for optimum ship design and propulsion power prediction. This paper aims at providing some of the maneuverability characteristics of a Japanese bulk carrier model, the JBC, in calm water using the computational fluid dynamics solvers SHIP Motion and OpenFOAM. The solvers are based on the Reynolds-averaged Navier-Stokes (RANS) method and solve structured grids using the finite volume method (FVM). This paper compares the numerical results of the calm water test for the JBC model with available experimental results. The calm water test results include the total drag coefficient, average sinkage, and trim data. Visualization data for pressure distribution on the hull surface and the free water surface have also been included. The paper concludes that the presented solvers predict the resistance and maneuverability characteristics of the bulk carrier with reasonable accuracy while utilizing minimal computational resources.
1992-12-21
[Extraction residue: report front matter and table of contents. Recoverable topics include the X3DNet X-based neural network simulator (O'Reilly, 1991), trace-based protocol analysis (TBPA), its definition and algorithm, requirements for testing process models using TBPA, and tools for building and testing process models.]
ERIC Educational Resources Information Center
Helwig, Robert; Anderson, Lisbeth; Tindal, Gerald
2002-01-01
An 11-item math concept curriculum-based measure (CBM) was administered to 171 eighth grade students. Scores were correlated with scores from a computer adaptive test designed in conjunction with the state to approximate the official statewide mathematics achievement tests. Correlations for general education students and students with learning…
Clinical history and biologic age predicted falls better than objective functional tests.
Gerdhem, Paul; Ringsberg, Karin A M; Akesson, Kristina; Obrant, Karl J
2005-03-01
Fall risk assessment is important because the consequences, such as a fracture, may be devastating. The objective of this study was to find the test or tests that best predicted falls in a population-based sample of elderly women. The fall-predictive ability of a questionnaire, a subjective estimate of biologic age, and objective functional tests (gait, balance [Romberg and sway tests], thigh muscle strength, and visual acuity) was compared in 984 randomly selected women, all 75 years of age. A recalled fall was the most important predictor of future falls. Only recalled falls and intake of psychoactive drugs independently predicted future falls. Women with at least five of the most important fall predictors (previous falls, conditions affecting balance, tendency to fall, intake of psychoactive medication, inability to stand on one leg, high biologic age) had an odds ratio of 11.27 (95% confidence interval 4.61-27.60) for a fall (sensitivity 70%, specificity 79%). The more time-consuming objective functional tests were of limited importance for fall prediction. A simple clinical history, the inability to stand on one leg, and a subjective estimate of biologic age were more important as part of the fall risk assessment.
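The odds-ratio-with-confidence-interval calculation underlying such fall-risk statements can be sketched from a 2x2 table; the counts below are invented for illustration and do not reproduce the study's OR of 11.27 (CI 4.61-27.60).

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = fallers with the risk factor,    b = non-fallers with the risk factor,
    c = fallers without the risk factor, d = non-fallers without it."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts chosen only to illustrate the arithmetic
or_, lo, hi = odds_ratio_ci(28, 40, 52, 864)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

The wide interval reported in the study reflects the relatively small number of women carrying five or more predictors, which this formula captures through the 1/cell terms in the standard error.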
ERIC Educational Resources Information Center
Makransky, Guido; Havmose, Philip; Vang, Maria Louison; Andersen, Tonny Elmose; Nielsen, Tine
2017-01-01
The aim of this study was to evaluate the predictive validity of a two-step admissions procedure that included a cognitive ability test followed by multiple mini-interviews (MMIs) used to assess non-cognitive skills, compared to grade-based admissions relative to subsequent drop-out rates and academic achievement after one and two years of study.…
A Maximal Graded Exercise Test to Accurately Predict VO2max in 18-65-Year-Old Adults
ERIC Educational Resources Information Center
George, James D.; Bradshaw, Danielle I.; Hyde, Annette; Vehrs, Pat R.; Hager, Ronald L.; Yanowitz, Frank G.
2007-01-01
The purpose of this study was to develop an age-generalized regression model to predict maximal oxygen uptake (VO sub 2 max) based on a maximal treadmill graded exercise test (GXT; George, 1996). Participants (N = 100), ages 18-65 years, reached a maximal level of exertion (mean plus or minus standard deviation [SD]; maximal heart rate [HR sub…
Testing model for prediction system of 1-AU arrival times of CME-associated interplanetary shocks
NASA Astrophysics Data System (ADS)
Ogawa, Tomoya; den, Mitsue; Tanaka, Takashi; Sugihara, Kohta; Takei, Toshifumi; Amo, Hiroyoshi; Watari, Shinichi
We test a model to predict arrival times of interplanetary shock waves associated with coronal mass ejections (CMEs) using a three-dimensional adaptive mesh refinement (AMR) code. The model is used for the prediction system we are developing, which has a Web-based user interface and is aimed at users who are not familiar with operating computers and numerical simulations, or who are not researchers. We apply the model to interplanetary CME events. We first choose coronal parameters so that the properties of the background solar wind observed by the ACE spacecraft are reproduced. Then we input CME parameters observed by SOHO/LASCO. Finally, we compare the predicted arrival times with observed ones. We describe the results of the test and discuss the tendencies of the model.
Using iron studies to predict HFE mutations in New Zealand: implications for laboratory testing.
O'Toole, Rebecca; Romeril, Kenneth; Bromhead, Collette
2017-04-01
The diagnosis of hereditary haemochromatosis (HH) is not straightforward because symptoms are often absent or non-specific, and biochemical markers of iron overloading may be affected by other conditions. The aim was to measure the correlation between iron studies and HFE genotype to inform evidence-based recommendations for laboratory testing in New Zealand. Results from 2388 patients genotyped for C282Y, H63D and S65C in Wellington, New Zealand from 2007 to 2013 were compared with their biochemical phenotype as quantified by serum ferritin (SF), transferrin saturation (TS), serum iron (SI) and serum transferrin (ST). The predictive power of these markers was evaluated by receiver operating characteristic (ROC) curve analysis, and if a statistically significant association for a variable was seen, sensitivity, specificity and predictive values were calculated. Test ordering patterns showed that 62% of HFE genotyping tests were ordered because of an elevated SF alone, and only 11% of these had a C-reactive protein test to rule out an acute phase reaction. The association between SF and significant HFE genotypes was low. However, TS values ≥45% predicted HH mutations with the highest sensitivity and specificity. A SF of >1000 µg/L was found in one at-risk patient (a C282Y homozygote) who had a TS <45%. Our analysis highlights the need for clear guidelines for the investigation of hyperferritinaemia and HH in New Zealand. Using our findings, we developed an evidence-based laboratory testing algorithm based on a TS ≥45%, a SF ≥1000 µg/L and/or a family history of HH, which identified all C282Y homozygotes in this study. © 2016 Royal Australasian College of Physicians.
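Evaluating a single cutoff such as TS ≥ 45% against a genotype reference standard reduces to a 2x2 table. A minimal sketch, using hypothetical TS values and genotype labels rather than the Wellington cohort:

```python
def threshold_metrics(values, truth, cutoff):
    """Sensitivity, specificity, PPV and NPV for the rule 'value >= cutoff'
    against a binary reference standard (e.g. C282Y homozygosity)."""
    tp = sum(1 for v, t in zip(values, truth) if v >= cutoff and t)
    fp = sum(1 for v, t in zip(values, truth) if v >= cutoff and not t)
    fn = sum(1 for v, t in zip(values, truth) if v < cutoff and t)
    tn = sum(1 for v, t in zip(values, truth) if v < cutoff and not t)
    return {"sens": tp / (tp + fn), "spec": tn / (tn + fp),
            "ppv": tp / (tp + fp), "npv": tn / (tn + fn)}

# Hypothetical transferrin-saturation values (%) and genotype labels (1 = HH genotype)
ts = [62, 55, 48, 41, 35, 52, 30, 25, 58, 20]
hh = [1,  1,  1,  0,  0,  0,  0,  0,  1,  0]
m = threshold_metrics(ts, hh, 45)
print(m)
```

Sweeping `cutoff` over the observed values and plotting sensitivity against 1 - specificity would yield the ROC curve used in the study's analysis.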
Green, Jasmine; Liem, Gregory Arief D; Martin, Andrew J; Colmar, Susan; Marsh, Herbert W; McInerney, Dennis
2012-10-01
The study tested three theoretically/conceptually hypothesized longitudinal models of academic processes leading to academic performance. Based on a longitudinal sample of 1866 high-school students across two consecutive years of high school (Time 1 and Time 2), the model with the most superior heuristic value demonstrated: (a) academic motivation and self-concept positively predicted attitudes toward school; (b) attitudes toward school positively predicted class participation and homework completion and negatively predicted absenteeism; and (c) class participation and homework completion positively predicted test performance whilst absenteeism negatively predicted test performance. Taken together, these findings provide support for the relevance of the self-system model and, particularly, the importance of examining the dynamic relationships amongst engagement factors of the model. The study highlights implications for educational and psychological theory, measurement, and intervention. Copyright © 2012 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.
Early Numeracy Indicators: Examining Predictive Utility Across Years and States
ERIC Educational Resources Information Center
Conoyer, Sarah J.; Foegen, Anne; Lembke, Erica S.
2016-01-01
Two studies using similar methods in two states investigated the long-term predictive utility of two single-skill early numeracy Curriculum Based Measures (CBMs) and the degree to which they can adequately predict high-stakes test scores. Data were drawn from kindergarten and first-grade students. State standardized assessment data from the…
Driving and Low Vision: Validity of Assessments for Predicting Performance of Drivers
ERIC Educational Resources Information Center
Strong, J. Graham; Jutai, Jeffrey W.; Russell-Minda, Elizabeth; Evans, Mal
2008-01-01
The authors conducted a systematic review to examine whether vision-related assessments can predict the driving performance of individuals who have low vision. The results indicate that measures of visual field, contrast sensitivity, cognitive and attention-based tests, and driver screening tools have variable utility for predicting real-world…
Denault, Anne-Sophie; Guay, Frédéric
2017-01-01
Participation in extracurricular activities is a promising avenue for enhancing students' school motivation. Using self-determination theory (Deci & Ryan, 2000), the goal of this study was to test a serial multiple mediator model. In this model, students' perceptions of autonomy support from their extracurricular activity leader predicted their activity-based intrinsic and identified regulations. In turn, these regulations predicted their school-based intrinsic and identified regulations during the same school year. Finally, these regulations predicted their school-based intrinsic and identified regulations one year later. A total of 276 youths (54% girls) from disadvantaged neighborhoods were surveyed over two waves of data collection. The proposed mediation model was supported for both types of regulation. These results highlight the generalization effects of motivation from the extracurricular activity context to the school context. Copyright © 2016 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.
Experimental and computational prediction of glass transition temperature of drugs.
Alzghoul, Ahmad; Alhalaweh, Amjad; Mahlin, Denny; Bergström, Christel A S
2014-12-22
Glass transition temperature (Tg) is an important inherent property of an amorphous solid material that is usually determined experimentally. In this study, the relation between Tg and melting temperature (Tm) was evaluated using a data set of 71 structurally diverse druglike compounds. Further, in silico models for prediction of Tg were developed based on calculated molecular descriptors and linear (multilinear regression, partial least-squares, principal component regression) and nonlinear (neural network, support vector regression) modeling techniques. The models based on Tm predicted Tg with an RMSE of 19.5 K for the test set. Among the five computational models developed herein, support vector regression gave the best result, with an RMSE of 18.7 K for the test set using only four chemical descriptors. Hence, two different models that predict Tg of drug-like molecules with high accuracy were developed. If Tm is available, a simple linear regression can be used to predict Tg. However, the results also suggest that support vector regression and calculated molecular descriptors can predict Tg with equal accuracy, even before compound synthesis.
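A minimal sketch of the "predict Tg from Tm by simple linear regression" idea. The Tm/Tg pairs below are invented to roughly follow the well-known ~2/3 rule of thumb for the Tg/Tm ratio; they are not the study's 71-compound data set, and the fitted slope and RMSE are properties of this toy data only.

```python
import math

def fit_line(x, y):
    """Ordinary least squares fit y = a*x + b (closed form)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

# Hypothetical Tm/Tg pairs in kelvin (illustrative only)
tm = [400.0, 430.0, 460.0, 490.0, 520.0]
tg = [268.0, 290.0, 304.0, 330.0, 345.0]
a, b = fit_line(tm, tg)
pred = [a * x + b for x in tm]
rmse = math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, tg)) / len(tg))
print(round(a, 3), round(rmse, 2))
```

On real drug data the scatter is much larger, which is why the study reports an RMSE near 20 K even for its best models.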
Developing an in silico minimum inhibitory concentration panel test for Klebsiella pneumoniae
Nguyen, Marcus; Brettin, Thomas; Long, S. Wesley; ...
2018-01-11
Antimicrobial-resistant infections are a serious public health threat worldwide. Whole genome sequencing approaches to rapidly identify pathogens and predict antibiotic resistance phenotypes are becoming more feasible and may offer a way to reduce clinical test turnaround times compared to conventional culture-based methods, and in turn, improve patient outcomes. In this study, we use whole genome sequence data from 1668 clinical isolates of Klebsiella pneumoniae to develop an XGBoost-based machine learning model that accurately predicts minimum inhibitory concentrations (MICs) for 20 antibiotics. The overall accuracy of the model, within ±1 two-fold dilution factor, is 92%. Individual accuracies are ≥90% for 15/20 antibiotics. We show that the MICs predicted by the model correlate with known antimicrobial resistance genes. Importantly, the genome-wide approach described in this study offers a way to predict MICs for isolates without knowledge of the underlying gene content. This study shows that machine learning can be used to build a complete in silico MIC prediction panel for K. pneumoniae and provides a framework for building MIC prediction models for other pathogenic bacteria.
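The "within ±1 two-fold dilution" accuracy used to score MIC predictions is naturally computed on a log2 scale, since MICs are measured on a doubling-dilution series. A small sketch with hypothetical MIC values (not the study's isolates):

```python
import math

def within_one_dilution(pred_mics, true_mics):
    """Fraction of predictions within +/- one two-fold dilution step,
    i.e. |log2(pred) - log2(true)| <= 1."""
    hits = sum(1 for p, t in zip(pred_mics, true_mics)
               if abs(math.log2(p) - math.log2(t)) <= 1)
    return hits / len(true_mics)

# Hypothetical MICs in ug/mL on a doubling-dilution scale
true_m = [0.5, 1, 2, 4, 8, 16, 32, 64]
pred_m = [0.5, 2, 2, 8, 8, 4, 32, 64]
print(within_one_dilution(pred_m, true_m))
```

Here one prediction (4 vs. 16) is two dilution steps off and counts as an error, while predictions one step off (e.g. 2 vs. 1) still count as correct, mirroring the tolerance used in the study's 92% figure.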
Aerodynamic heating environment definition/thermal protection system selection for the HL-20
NASA Astrophysics Data System (ADS)
Wurster, K. E.; Stone, H. W.
1993-09-01
Definition of the aerothermal environment is critical to any vehicle such as the HL-20 Personnel Launch System that operates within the hypersonic flight regime. Selection of an appropriate thermal protection system design is highly dependent on the accuracy of the heating-environment prediction. It is demonstrated that the entry environment determines the thermal protection system design for this vehicle. The methods used to predict the thermal environment for the HL-20 Personnel Launch System vehicle are described. Comparisons of the engineering solutions with computational fluid dynamic predictions, as well as wind-tunnel test results, show good agreement. The aeroheating predictions over several critical regions of the vehicle, including the stagnation areas of the nose and leading edges, windward centerline and wing surfaces, and leeward surfaces, are discussed. Results of predictions based on the engineering methods found within the MINIVER aerodynamic heating code are used in conjunction with the results of the extensive wind-tunnel tests on this configuration to define a flight thermal environment. Finally, the selection of the thermal protection system based on these predictions and current technology is described.
Fenlon, Caroline; O'Grady, Luke; Doherty, Michael L; Dunnion, John; Shalloo, Laurence; Butler, Stephen T
2017-07-01
Reproductive performance in pasture-based production systems has a fundamentally important effect on economic efficiency. The individual factors affecting the probability of submission and conception are multifaceted and have been extensively researched. The present study analyzed some of these factors in relation to service-level probability of conception in seasonal-calving pasture-based dairy cows to develop a predictive model of conception. Data relating to 2,966 services from 737 cows on 2 research farms were used for model development and data from 9 commercial dairy farms were used for model testing, comprising 4,212 services from 1,471 cows. The data spanned a 15-yr period and originated from seasonal-calving pasture-based dairy herds in Ireland. The calving season for the study herds extended from January to June, with peak calving in February and March. A base mixed-effects logistic regression model was created using a stepwise model-building strategy and incorporated parity, days in milk, interservice interval, calving difficulty, and predicted transmitting abilities for calving interval and milk production traits. To attempt to further improve the predictive capability of the model, the addition of effects that were not statistically significant was considered, resulting in a final model composed of the base model with the inclusion of BCS at service. The models' predictions were evaluated using discrimination to measure their ability to correctly classify positive and negative cases. Precision, recall, F-score, and area under the receiver operating characteristic curve (AUC) were calculated. Calibration tests measured the accuracy of the predicted probabilities. These included tests of overall goodness-of-fit, bias, and calibration error. Both models performed better than using the population average probability of conception. 
Neither of the models showed high levels of discrimination (base model AUC 0.61, final model AUC 0.62), possibly because of the narrow central range of conception rates in the study herds. The final model was found to reliably predict the probability of conception without bias when evaluated against the full external data set, with a mean absolute calibration error of 2.4%. The chosen model could be used to support a farmer's decision-making and in stochastic simulation of fertility in seasonal-calving pasture-based dairy cows. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
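The calibration checks described above compare predicted probabilities with observed event rates. A minimal binned version of mean absolute calibration error, on invented predicted conception probabilities and outcomes (the study's reported 2.4% error comes from its own external data set, not this sketch):

```python
def mean_abs_calibration_error(probs, outcomes, n_bins=5):
    """Bin predictions by predicted probability, compare each bin's mean
    prediction with its observed event rate, and average the absolute
    gaps weighted by bin size."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, outcomes):
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into the top bin
        bins[idx].append((p, y))
    err, n = 0.0, len(probs)
    for b in bins:
        if b:
            mean_p = sum(p for p, _ in b) / len(b)
            rate = sum(y for _, y in b) / len(b)
            err += abs(mean_p - rate) * len(b) / n
    return err

# Hypothetical service-level predicted probabilities and conception outcomes
probs = [0.1, 0.1, 0.3, 0.3, 0.5, 0.5, 0.7, 0.7, 0.9, 0.9]
outs  = [0,   0,   1,   0,   1,   0,   1,   1,   1,   1]
print(round(mean_abs_calibration_error(probs, outs), 3))
```

A model can discriminate poorly (low AUC) yet calibrate well, which matches the study's finding of reliable probabilities despite modest AUCs.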
Orucevic, Amila; Bell, John L; McNabb, Alison P; Heidel, Robert E
2017-05-01
Oncotype DX (ODX) recurrence score (RS) breast cancer (BC) assay is costly and performed in only ~1/3 of estrogen receptor (ER)-positive BC patients in the USA. We have now developed a user-friendly nomogram surrogate prediction model for ODX based on a large dataset from the National Cancer Data Base (NCDB) to assist in selecting patients for whom further ODX testing may not be necessary, and as a surrogate for patients for whom ODX testing is not affordable or available. Six clinicopathologic variables of 27,719 ODX-tested ER+/HER2-/lymph node-negative patients with 6-50 mm tumor size captured by the NCDB from 2010 to 2012 were assessed with logistic regression to predict high-risk or low-risk ODX RS test results with TAILORx-trial and commercial cut-off values; 12,763 ODX-tested patients from 2013 were used for external validation. The predictive accuracy of the regression model was evaluated using receiver operating characteristic analysis. Model fit was analyzed by plotting the predicted probabilities against the actual probabilities. A user-friendly calculator version of the nomograms is available online at the University of Tennessee Medical Center website (Knoxville, TN). Grade and progesterone receptor status were the strongest predictors of both low-risk and high-risk ODX RS, followed by age, tumor size, histologic tumor type, and lymph-vascular invasion (C-indexes: 0.85 vs. 0.88 for TAILORx-trial vs. commercial cut-off values, respectively). This is the first study of this scale to show confidently that clinicopathologic variables can be used to predict low-risk or high-risk ODX RS using our nomogram models. These novel nomograms will be useful tools to help physicians and patients decide whether further ODX testing is necessary, and are excellent surrogates for patients for whom ODX testing is not affordable or available.
Makretsov, Nikita; Gilks, C Blake; Alaghehbandan, Reza; Garratt, John; Quenneville, Louise; Mercer, Joel; Palavdzic, Dragana; Torlakovic, Emina E
2011-07-01
External quality assurance and proficiency testing programs for breast cancer predictive biomarkers are based largely on traditional ad hoc design; at present there is no universal consensus on definition of a standard reference value for samples used in external quality assurance programs. To explore reference values for estrogen receptor and progesterone receptor immunohistochemistry in order to develop an evidence-based analytic platform for external quality assurance. There were 31 participating laboratories, 4 of which were previously designated as "expert" laboratories. Each participant tested a tissue microarray slide with 44 breast carcinomas for estrogen receptor and progesterone receptor and submitted it to the Canadian Immunohistochemistry Quality Control Program for analysis. Nuclear staining in 1% or more of the tumor cells was a positive score. Five methods for determining reference values were compared. All reference values showed 100% agreement for estrogen receptor and progesterone receptor scores, when indeterminate results were excluded. Individual laboratory performance (agreement rates, test sensitivity, test specificity, positive predictive value, negative predictive value, and κ value) was very similar for all reference values. Identification of suboptimal performance by all methods was identical for 30 of 31 laboratories. Estrogen receptor assessment of 1 laboratory was discordant: agreement was less than 90% for 3 of 5 reference values and greater than 90% with the use of 2 other reference values. Various reference values provide equivalent laboratory rating. In addition to descriptive feedback, our approach allows calculation of technical test sensitivity and specificity, positive and negative predictive values, agreement rates, and κ values to guide corrective actions.
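Agreement rates and κ values of the kind reported here can be computed from paired score lists. A sketch of Cohen's kappa for binary ER calls, comparing a hypothetical laboratory's scores against a reference value on ten invented tissue cores:

```python
def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two binary (0/1) score lists,
    e.g. a lab's ER calls versus the reference value."""
    n = len(rater_a)
    po = sum(1 for a, b in zip(rater_a, rater_b) if a == b) / n  # observed agreement
    # expected agreement by chance, from each rater's marginal positive rate
    p_yes = (sum(rater_a) / n) * (sum(rater_b) / n)
    p_no = (1 - sum(rater_a) / n) * (1 - sum(rater_b) / n)
    pe = p_yes + p_no
    return (po - pe) / (1 - pe)

# Hypothetical ER calls on 10 cores (1 = positive at the >=1% nuclear-staining cutoff)
ref = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
lab = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
print(round(cohens_kappa(ref, lab), 3))
```

Unlike the raw agreement rate (90% here), κ discounts the agreement expected by chance, which is why the program reports both alongside sensitivity, specificity and predictive values.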
Zhang, Huiling; Huang, Qingsheng; Bei, Zhendong; Wei, Yanjie; Floudas, Christodoulos A
2016-03-01
In this article, we present COMSAT, a hybrid framework for residue contact prediction of transmembrane (TM) proteins, integrating a support vector machine (SVM) method and a mixed integer linear programming (MILP) method. COMSAT consists of two modules: COMSAT_SVM, which is trained mainly on position-specific scoring matrix features, and COMSAT_MILP, which is an ab initio method based on optimization models. Contacts predicted by the SVM model are ranked by SVM confidence scores, and a threshold is trained to improve the reliability of the predicted contacts. For TM proteins with no contacts above the threshold, COMSAT_MILP is used. The proposed hybrid contact prediction scheme was tested on two independent TM protein sets based on the contact definition of 14 Å between Cα-Cα atoms. First, using a rigorous leave-one-protein-out cross validation on the training set of 90 TM proteins, an accuracy of 66.8%, a coverage of 12.3%, a specificity of 99.3% and a Matthews correlation coefficient (MCC) of 0.184 were obtained for residue pairs that are at least six amino acids apart. Second, when tested on a test set of 87 TM proteins, the proposed method showed a prediction accuracy of 64.5%, a coverage of 5.3%, a specificity of 99.4% and an MCC of 0.106. COMSAT shows satisfactory results when compared with 12 other state-of-the-art predictors, and is more robust in terms of prediction accuracy as the length and complexity of TM proteins increase. COMSAT is freely accessible at http://hpcc.siat.ac.cn/COMSAT/. © 2016 Wiley Periodicals, Inc.
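The Matthews correlation coefficient reported above combines all four confusion counts into a single score, which is useful for heavily imbalanced problems like contact prediction, where true contacts are rare. A minimal implementation; the example counts are illustrative, not COMSAT's actual confusion table:

```python
import math

def mcc(tp, fp, tn, fn):
    """Matthews correlation coefficient from confusion-matrix counts.
    Returns 0.0 when any marginal is zero (the usual convention)."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Hypothetical counts for a sparse-contact setting: few predicted positives,
# high specificity, low coverage
print(round(mcc(8, 4, 931, 57), 3))
```

With such imbalanced counts the precision can look respectable while the MCC stays modest, which is consistent with the low MCC values typical of contact prediction despite >99% specificity.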
Real-time sensing of fatigue crack damage for information-based decision and control
NASA Astrophysics Data System (ADS)
Keller, Eric Evans
Information-based decision and control for structures that are subject to failure by fatigue cracking is based on the following notion: maintenance, usage scheduling, and control parameter tuning can be optimized through real-time knowledge of the current state of fatigue crack damage. Additionally, if the material properties of a mechanical structure can be identified within a smaller range, then the remaining-life prediction of that structure will be substantially more accurate. Information-based decision systems can rely on physical models, estimation of material properties, exact knowledge of usage history, and sensor data to synthesize an accurate snapshot of the current state of damage and the likely remaining life of a structure under given assumed loading. The work outlined in this thesis is structured to enhance the development of information-based decision and control systems. This is achieved by constructing a test facility for laboratory experiments on real-time damage sensing. This test facility makes use of a methodology that has been formulated for fatigue crack model parameter estimation and significantly improves the quality of predictions of remaining life. Specifically, the thesis focuses on development of an on-line fatigue crack damage sensing and life prediction system that is built upon the disciplines of systems sciences and mechanics of materials. A major part of the research effort has been expended to design and fabricate a test apparatus which allows: (i) measurement and recording of statistical data for fatigue crack growth in metallic materials via different sensing techniques; and (ii) identification of stochastic model parameters for prediction of fatigue crack damage.
To this end, this thesis describes the test apparatus and the associated instrumentation based on four different sensing techniques, namely, traveling optical microscopy, ultrasonic flaw detection, Alternating Current Potential Drop (ACPD), and fiber-optic extensometry-based compliance, for crack length measurements.
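Remaining-life prediction from measured crack lengths is commonly based on a crack-growth law such as Paris' law, da/dN = C·(ΔK)^m. The sketch below shows that general idea under assumed, illustrative constants; it is not the stochastic model identified in the thesis.

```python
import math

# Hedged sketch: remaining life from a measured crack length via Paris'
# law, da/dN = C * (dK)^m with dK = Y * dsigma * sqrt(pi * a).
# All constants below are illustrative, not identified material properties.
def cycles_to_failure(a0, af, dsigma, C, m, Y=1.0, da=1e-5):
    """Integrate Paris' law from crack length a0 to af (metres).
    dsigma in MPa; returns the predicted number of load cycles."""
    a, N = a0, 0.0
    while a < af:
        dK = Y * dsigma * math.sqrt(math.pi * a)  # stress-intensity range
        N += da / (C * dK ** m)                   # cycles to grow by da
        a += da
    return N

# A longer measured crack implies fewer cycles of remaining life:
short_life = cycles_to_failure(2e-3, 5e-3, 100.0, 1e-11, 3.0)
long_life = cycles_to_failure(1e-3, 5e-3, 100.0, 1e-11, 3.0)
```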
Resting-State Functional Connectivity Predicts Cognitive Impairment Related to Alzheimer's Disease.
Lin, Qi; Rosenberg, Monica D; Yoo, Kwangsun; Hsu, Tiffany W; O'Connell, Thomas P; Chun, Marvin M
2018-01-01
Resting-state functional connectivity (rs-FC) is a promising neuromarker for cognitive decline in the aging population, based on its ability to reveal functional differences associated with cognitive impairment across individuals, and because rs-fMRI may be less taxing for participants than task-based fMRI or neuropsychological tests. Here, we employ an approach that uses rs-FC to predict the Alzheimer's Disease Assessment Scale (11 items; ADAS11) scores, which measure overall cognitive functioning, in novel individuals. We applied this technique, connectome-based predictive modeling, to a heterogeneous sample of 59 subjects from the Alzheimer's Disease Neuroimaging Initiative, including normal aging, mild cognitive impairment, and AD subjects. First, we built linear regression models to predict ADAS11 scores from rs-FC measured with Pearson's r correlation. The positive network model tested with leave-one-out cross validation (LOOCV) significantly predicted individual differences in cognitive function from rs-FC. In a second analysis, we considered other functional connectivity features, accordance and discordance, which disentangle the correlation and anticorrelation components of activity timecourses between brain areas. Using partial least squares regression and LOOCV, we again built models to successfully predict ADAS11 scores in novel individuals. Our study provides promising evidence that rs-FC can reveal cognitive impairment in an aging population, although more development is needed for clinical application.
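The LOOCV procedure used to test these predictive models can be illustrated with a minimal sketch: for each held-out subject, a regression is fit on all remaining subjects and used to predict the held-out score. This simplifies the full connectome-based predictive modeling pipeline (edge selection and network-strength summarization are omitted) down to a single illustrative predictor.

```python
# Minimal LOOCV sketch: predict each subject's score from one summary
# feature using a simple regression fit on all other subjects.
def loocv_predictions(x, y):
    preds = []
    for i in range(len(x)):
        xt = [v for j, v in enumerate(x) if j != i]  # training features
        yt = [v for j, v in enumerate(y) if j != i]  # training scores
        mx, my = sum(xt) / len(xt), sum(yt) / len(yt)
        slope = (sum((a - mx) * (b - my) for a, b in zip(xt, yt))
                 / sum((a - mx) ** 2 for a in xt))
        intercept = my - slope * mx
        preds.append(intercept + slope * x[i])  # predict held-out subject
    return preds
```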
Novel Approach for Prediction of Localized Necking in Case of Nonlinear Strain Paths
NASA Astrophysics Data System (ADS)
Drotleff, K.; Liewald, M.
2017-09-01
Rising customer expectations regarding design complexity and weight reduction of sheet metal components, along with further reduced time to market, imply an increased demand for process validation using numerical forming simulation. Formability prediction, however, is often still based on the forming limit diagram first presented in the 1960s. Despite many drawbacks in the case of nonlinear strain paths and major advances in research in recent years, the forming limit curve (FLC) is still one of the most commonly used criteria for assessing the formability of sheet metal materials. Especially when forming complex part geometries, nonlinear strain paths may occur, which cannot be predicted using the conventional FLC concept. In this paper a novel approach for the calculation of FLCs for nonlinear strain paths is presented. By combining an approach for predicting the FLC from tensile test data with the IFU-FLC criterion, a model for the prediction of localized necking under nonlinear strain paths can be derived. The presented model is based purely on experimental tensile test data, making it easy to calibrate for any given material. The resulting prediction of localized necking is validated using an experimental deep-drawing specimen made of AA6014 material with a sheet thickness of 1.04 mm. The results are compared to the IFU-FLC criterion based on data from pre-stretched Nakajima specimens.
Zhou, Wengang; Dickerson, Julie A
2012-01-01
Knowledge of protein subcellular locations can help decipher a protein's biological function. This work proposes new features: one sequence-based feature, Hybrid Amino Acid Pair (HAAP), and two structure-based features, Secondary Structural Element Composition (SSEC) and solvent accessibility state frequency. A multi-class Support Vector Machine is developed to predict the locations. Testing on two established data sets yields better prediction accuracies than the best available systems. Comparisons with existing methods show comparable results to ESLPred2. When StruLocPred is applied to the entire Arabidopsis proteome, over 77% of proteins with known locations match the prediction results. An implementation of this system is at http://wgzhou.ece.iastate.edu/StruLocPred/.
A Grammatical Approach to RNA-RNA Interaction Prediction
NASA Astrophysics Data System (ADS)
Kato, Yuki; Akutsu, Tatsuya; Seki, Hiroyuki
2007-11-01
Much attention has been paid to two interacting RNA molecules involved in post-transcriptional control of gene expression. Although there have been a few studies on RNA-RNA interaction prediction based on dynamic programming algorithms, no grammar-based approach has been proposed. The purpose of this paper is to provide a new modeling for RNA-RNA interaction based on multiple context-free grammar (MCFG). We present a polynomial-time parsing algorithm for finding the most likely derivation tree for the stochastic version of MCFG, which is applicable to RNA joint secondary structure prediction including kissing hairpin loops. In addition, elementary tests on RNA-RNA interaction prediction have shown that the proposed method is comparable to Alkan et al.'s method.
Mapping the Human Toxome by Systems Toxicology
Bouhifd, Mounir; Hogberg, Helena T.; Kleensang, Andre; Maertens, Alexandra; Zhao, Liang; Hartung, Thomas
2014-01-01
Toxicity testing typically involves studying adverse health outcomes in animals subjected to high doses of toxicants, with subsequent extrapolation to expected human responses at lower doses. The low throughput of current toxicity testing approaches (which are largely the same for industrial chemicals, pesticides and drugs) has led to a backlog of more than 80,000 chemicals to which human beings are potentially exposed and whose toxicity remains largely unknown. By employing new testing strategies that use predictive, high-throughput cell-based assays (of human origin) to evaluate perturbations in key pathways, referred to as pathways of toxicity, and to conduct targeted testing against those pathways, we can begin to greatly accelerate our ability to test the vast “storehouses” of chemical compounds using a rational, risk-based approach to chemical prioritization, and to provide test results that are more predictive of human toxicity than current methods. The NIH Transformative Research Grant project Mapping the Human Toxome by Systems Toxicology aims at developing the tools for pathway mapping, annotation and validation, as well as the respective knowledge base to share this information. PMID:24443875
Control and prediction of the course of brewery fermentations by gravimetric analysis.
Kosín, P; Savel, J; Broz, A; Sigler, K
2008-01-01
A simple, fast and cheap test suitable for predicting the course of brewery fermentations based on mass analysis is described and its efficiency is evaluated. Compared to commonly used yeast vitality tests, this analysis takes into account wort composition and other factors that influence fermentation performance. It can be used to predict the shape of the fermentation curve in brewery fermentations and in research and development projects concerning yeast vitality, fermentation conditions and wort composition. It can also be a useful tool for homebrewers to control their fermentations.
Hermes, Helen E.; Teutonico, Donato; Preuss, Thomas G.; Schneckener, Sebastian
2018-01-01
The environmental fates of pharmaceuticals and the effects of crop protection products on non-target species are subjects that are undergoing intense review. Since measuring the concentrations and effects of xenobiotics on all affected species under all conceivable scenarios is not feasible, standard laboratory animals such as rabbits are tested, and the observed adverse effects are translated to focal species for environmental risk assessments. In that respect, mathematical modelling is becoming increasingly important for evaluating the consequences of pesticides in untested scenarios. In particular, physiologically based pharmacokinetic/toxicokinetic (PBPK/TK) modelling is a well-established methodology used to predict tissue concentrations based on the absorption, distribution, metabolism and excretion of drugs and toxicants. In the present work, a rabbit PBPK/TK model is developed and evaluated with data available from the literature. The model predictions include scenarios of both intravenous (i.v.) and oral (p.o.) administration of small and large compounds. The presented rabbit PBPK/TK model predicts the pharmacokinetics (Cmax, AUC) of the tested compounds with an average 1.7-fold error. This result indicates a good predictive capacity of the model, which enables its use for risk assessment modelling and simulations. PMID:29561908
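The reported average 1.7-fold error is conventionally computed as a geometric mean of prediction/observation ratios, taking whichever of pred/obs or obs/pred exceeds one. A minimal sketch, assuming that convention (the example values are illustrative, not the study's data):

```python
import math

# Sketch of the conventional average fold-error metric for PK predictions:
# geometric mean of max(pred/obs, obs/pred) across compounds.
def average_fold_error(predicted, observed):
    ratios = [max(p / o, o / p) for p, o in zip(predicted, observed)]
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

# A model off by 2x in both directions averages to roughly a 2-fold error:
afe = average_fold_error([2.0, 1.0], [1.0, 2.0])
```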
Reevaluation of a walleye (Sander vitreus) bioenergetics model
Madenjian, Charles P.; Wang, Chunfang
2013-01-01
Walleye (Sander vitreus) is an important sport fish throughout much of North America, and walleye populations support valuable commercial fisheries in certain lakes as well. Using a corrected algorithm for balancing the energy budget, we reevaluated the performance of the Wisconsin bioenergetics model for walleye in the laboratory. Walleyes were fed rainbow smelt (Osmerus mordax) in four laboratory tanks each day during a 126-day experiment. Feeding rates ranged from 1.4 to 1.7% of walleye body weight per day. Based on a statistical comparison of bioenergetics model predictions of monthly consumption with observed monthly consumption, we concluded that the bioenergetics model estimated food consumption by walleye without any significant bias. Similarly, based on a statistical comparison of bioenergetics model predictions of weight at the end of the monthly test period with observed weight, we concluded that the bioenergetics model predicted walleye growth without any detectable bias. In addition, the bioenergetics model predictions of cumulative consumption over the 126-day experiment differed from observed cumulative consumption by less than 10%. Although additional laboratory and field testing will be needed to fully evaluate model performance, based on our laboratory results, the Wisconsin bioenergetics model for walleye appears to be providing unbiased predictions of food consumption.
NASA Technical Reports Server (NTRS)
Mcgrath, W. R.; Richards, P. L.; Face, D. W.; Prober, D. E.; Lloyd, F. L.
1988-01-01
A systematic study of the gain and noise in superconductor-insulator-superconductor mixers employing Ta-based, Nb-based, and Pb-alloy-based tunnel junctions was made. These junctions displayed both weak and strong quantum effects at a signal frequency of 33 GHz. The effects of energy gap sharpness and subgap current were investigated and are quantitatively related to mixer performance. Detailed comparisons are made between the mixing results and the predictions of a three-port model approximation to the Tucker theory. Mixer performance was measured with a novel test apparatus which is accurate enough to allow for the first quantitative tests of theoretical noise predictions. It is found that the three-port model of the Tucker theory underestimates the mixer noise temperature by a factor of about 2 for all of the mixers. In addition, predicted values of available mixer gain are in reasonable agreement with experiment when quantum effects are weak. However, as quantum effects become strong, the predicted available gain diverges to infinity, which is in sharp contrast to the experimental results. Predictions of coupled gain do not always show such divergences.
NASA Astrophysics Data System (ADS)
Maljaars, E.; Felici, F.; Blanken, T. C.; Galperti, C.; Sauter, O.; de Baar, M. R.; Carpanese, F.; Goodman, T. P.; Kim, D.; Kim, S. H.; Kong, M.; Mavkov, B.; Merle, A.; Moret, J. M.; Nouailletas, R.; Scheffer, M.; Teplukhina, A. A.; Vu, N. M. T.; The EUROfusion MST1-team; The TCV-team
2017-12-01
The successful performance of a model predictive profile controller is demonstrated in simulations and experiments on the TCV tokamak, employing a profile controller test environment. Stable high-performance tokamak operation in hybrid and advanced plasma scenarios requires control over the safety factor profile (q-profile) and kinetic plasma parameters such as the plasma beta. This demands that reliable profile control routines be established in presently operational tokamaks. We present a model predictive profile controller that controls the q-profile and plasma beta using power requests to two clusters of gyrotrons and the plasma current request. The performance of the controller is analyzed in both simulations and TCV L-mode discharges, where successful tracking of the estimated inverse q-profile as well as plasma beta is demonstrated under uncertain plasma conditions and in the presence of disturbances. The controller exploits knowledge of the time-varying actuator limits in the actuator input calculation itself, such that fast transitions between targets are achieved without overshoot. A software environment is employed to prepare and test this and three other profile controllers in parallel in simulations and experiments on TCV. This set of tools includes the rapid plasma transport simulator RAPTOR and various algorithms to reconstruct the plasma equilibrium and plasma profiles by merging the available measurements with model-based predictions. In this work the estimated q-profile is based solely on RAPTOR model predictions, owing to the absence of internal current density measurements in TCV. These results encourage the further exploitation of model predictive profile control in experiments on TCV and other (future) tokamaks.
NP-59 test for preoperative localization of primary hyperaldosteronism.
Di Martino, Marcello; García Sanz, Iñigo; Muñoz de Nova, Jose Luis; Marín Campos, Cristina; Martínez Martín, Miguel; Domínguez Gadea, Luis
2017-03-01
Adrenal venous sampling is generally considered the gold standard for identifying unilateral hormone production in cases of primary hyperaldosteronism. The aim of this study is to evaluate whether the iodine-131-6-β-iodomethyl-19-norcholesterol (NP-59) test may represent an alternative in selected cases. Patients who underwent laparoscopic adrenalectomy for suspected primary hyperaldosteronism (n = 27) were retrospectively reviewed. When nuclear medicine tests were performed preoperatively, their results were compared with the histopathologic findings and clinical improvement. Nuclear medicine tests were performed in 13 patients: in 11 (84.6%), a planar anterior and posterior NP-59 scintigraphy was performed, and in two (15.4%), a SPECT/CT. Scintigraphy indicated a preoperative lateralization in 12 of 13 patients (92.3%). When the performance of the NP-59 test was assessed against pathologic results, it showed a sensitivity of 90.9% and a positive predictive value of 83.3%. When its performance was assessed against postoperative blood pressure control, both sensitivity and positive predictive value were 91.6%. Nuclear medicine tests represent a useful tool for the preoperative localization of primary hyperaldosteronism, with high sensitivity and positive predictive value. In patients with contraindications to adrenal venous sampling, such as contrast allergies, or when it is inconclusive, scintigraphy can represent a useful and non-invasive alternative.
Descatha, A; Dale, A-M; Franzblau, A; Coomes, J; Evanoff, B
2010-02-01
We evaluated the utility of physical examination manoeuvres in the prediction of carpal tunnel syndrome (CTS) in a population-based research study. We studied a cohort of 1108 newly employed workers in several industries. Each worker completed a symptom questionnaire, a structured physical examination and a nerve conduction study. For each hand, our CTS case definition required both median nerve conduction abnormality and symptoms classified as "classic" or "probable" on a hand diagram. We calculated the positive predictive values and likelihood ratios for physical examination manoeuvres in subjects with and without symptoms. The prevalence of CTS in our cohort was 1.2% for the right hand and 1.0% for the left hand. The likelihood ratios of a positive test for physical provocative tests ranged from 2.0 to 3.3, and those of a negative test from 0.3 to 0.9. The post-test probability of positive testing was <50% for all strategies tested. Our study found that physical examination, alone or in combination with symptoms, was not predictive of CTS in a working population. We suggest using specific symptoms as a first-level screening tool, and a nerve conduction study as a confirmatory test, as a case definition strategy in research settings.
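The conclusion that post-test probability stays below 50% follows directly from Bayes' rule in odds form. A minimal sketch using the reported prevalence (1.2%) and the largest reported positive likelihood ratio (3.3):

```python
# Bayes' rule in odds form: post-test probability from a pre-test
# probability and a test's likelihood ratio.
def post_test_probability(pretest_p, likelihood_ratio):
    odds = pretest_p / (1.0 - pretest_p)   # pre-test odds
    post_odds = odds * likelihood_ratio    # apply the likelihood ratio
    return post_odds / (1.0 + post_odds)   # back to probability

p = post_test_probability(0.012, 3.3)
# p stays below 0.04: even a positive provocative test leaves CTS
# unlikely, consistent with the paper's <50% post-test probabilities.
```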
NASA Technical Reports Server (NTRS)
Nyangweso, Emmanuel; Bole, Brian
2014-01-01
Successful prediction and management of battery life using prognostic algorithms through ground and flight tests is important for performance evaluation of electrical systems. This paper details the design of test beds suitable for replicating loading profiles that would be encountered in deployed electrical systems. The test bed data will be used to develop and validate prognostic algorithms for predicting battery discharge time and battery failure time. Online battery prognostic algorithms will enable health management strategies. The platform used for algorithm demonstration is the EDGE 540T electric unmanned aerial vehicle (UAV). The fully designed test beds developed and detailed in this paper can be used to conduct battery life tests by controlling current and recording voltage and temperature to develop a model that makes a prediction of end-of-charge and end-of-life of the system based on rapid state of health (SOH) assessment.
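A simple end-of-discharge prediction of the kind such test beds are built to validate can be sketched with coulomb counting. The function names and the constant-current assumption are ours for illustration; this is not the actual prognostic algorithm being developed.

```python
# Illustrative coulomb-counting sketch of battery discharge prediction.
def predict_discharge_time(capacity_ah, soc, current_a):
    """Hours until empty at an assumed constant load current."""
    return capacity_ah * soc / current_a

def simulate_soc(capacity_ah, soc0, profile):
    """profile: list of (current_a, duration_h) load segments.
    Returns the state of charge after each segment, clamped at 0."""
    soc, trace = soc0, []
    for current, dt in profile:
        soc = max(0.0, soc - current * dt / capacity_ah)
        trace.append(soc)
    return trace

# A 2 Ah pack at 50% charge sustains a 1 A load for about an hour:
hours = predict_discharge_time(2.0, 0.5, 1.0)
```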
Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L
2017-05-01
Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially harmful medication errors. 
Improved therapy-success prediction with GSS estimated from clinical HIV-1 sequences.
Pironti, Alejandro; Pfeifer, Nico; Kaiser, Rolf; Walter, Hauke; Lengauer, Thomas
2014-01-01
Rules-based HIV-1 drug-resistance interpretation (DRI) systems disregard many amino-acid positions of the drug's target protein. The aims of this study are (1) the development of a drug-resistance interpretation system that is based on HIV-1 sequences from clinical practice rather than hard-to-get phenotypes, and (2) the assessment of the benefit of taking all available amino-acid positions into account for DRI. A dataset containing 34,934 therapy-naïve and 30,520 drug-exposed HIV-1 pol sequences with treatment history was extracted from the EuResist database and the Los Alamos National Laboratory database. 2,550 therapy-change-episode baseline sequences (TCEB) were assigned to test set A. Test set B contains 1,084 TCEB from the HIVdb TCE repository. Sequences from patients absent in the test sets were used to train three linear support vector machines to produce scores that predict drug exposure pertaining to each of 20 antiretrovirals: the first one uses the full amino-acid sequences (DEfull), the second one only considers IAS drug-resistance positions (DEonlyIAS), and the third one disregards IAS drug-resistance positions (DEnoIAS). For performance comparison, test sets A and B were evaluated with DEfull, DEnoIAS, DEonlyIAS, geno2pheno[resistance], HIVdb, ANRS, HIV-GRADE, and REGA. Clinically-validated cut-offs were used to convert the continuous output of the first four methods into susceptible-intermediate-resistant (SIR) predictions. With each method, a genetic susceptibility score (GSS) was calculated for each therapy episode in each test set by converting the SIR prediction for its compounds to integer: S=2, I=1, and R=0. The GSS were used to predict therapy success as defined by the EuResist standard datum definition. Statistical significance was assessed using a Wilcoxon signed-rank test. 
A comparison of the therapy-success prediction performances among the different interpretation systems for test set A can be found in Table 1, while those for test set B are found in Figure 1. Therapy-success prediction of first-line therapies with DEnoIAS performed better than with DEonlyIAS (p < 10^-16). Therapy-success prediction benefits from the consideration of all available mutations. The increase in performance was largest in first-line therapies with transmitted drug-resistance mutations.
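The GSS computation described above (S=2, I=1, R=0, summed over a regimen's compounds) is simple enough to state directly as code:

```python
# The study's genetic susceptibility score: convert each compound's SIR
# prediction to an integer and sum over the therapy episode's compounds.
SIR_SCORE = {"S": 2, "I": 1, "R": 0}

def genetic_susceptibility_score(sir_calls):
    return sum(SIR_SCORE[call] for call in sir_calls)

# A three-drug regimen called susceptible, susceptible, intermediate:
gss = genetic_susceptibility_score(["S", "S", "I"])  # -> 5
```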
Li, Wen; Zhao, Li-Zhong; Ma, Dong-Wang; Wang, De-Zheng; Shi, Lei; Wang, Hong-Lei; Dong, Mo; Zhang, Shu-Yi; Cao, Lei; Zhang, Wei-Hua; Zhang, Xi-Peng; Zhang, Qing-Huai; Yu, Lin; Qin, Hai; Wang, Xi-Mo; Chen, Sam Li-Sheng
2018-05-01
We aimed to predict colorectal cancer (CRC) based on the demographic features and clinical correlates of personal symptoms and signs from Tianjin community-based CRC screening data. A total of 891,199 residents who were aged 60 to 74 and were screened in 2012 were enrolled. The Lasso logistic regression model was used to identify the predictors for CRC. Predictive validity was assessed by the receiver operating characteristic (ROC) curve, and a bootstrapping method was performed to validate the prediction model. CRC was best predicted by a model that included age, sex, education level, occupation, diarrhea, constipation, colon mucosa and bleeding, gallbladder disease, a stressful life event, family history of CRC, and a positive fecal immunochemical test (FIT). The area under the curve (AUC) for the questionnaire with a FIT was 84% (95% CI: 82%-86%), followed by 76% (95% CI: 74%-79%) for a FIT alone, and 73% (95% CI: 71%-76%) for the questionnaire alone. With 500 bootstrap replications, the estimated optimism (<0.005) shows good discrimination in the validation of the prediction model. A risk prediction model for CRC based on a series of symptoms and signs related to enteric diseases, in combination with a FIT, was developed from the first round of screening. The results of the current study are useful for increasing the awareness of high-risk subjects and for individual-risk-guided invitations or strategies to achieve mass screening for CRC.
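The ROC-based validation step can be illustrated with a self-contained AUC computation in its Mann-Whitney form: the probability that a randomly chosen case outscores a randomly chosen non-case. This is a generic sketch, not the study's code, and the scores below are made up.

```python
# Generic AUC computation (Mann-Whitney form): fraction of case/non-case
# pairs where the case's risk score is higher, counting ties as half.
def auc_from_scores(scores, labels):
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfectly separated risk scores give an AUC of 1.0:
auc = auc_from_scores([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])
```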
Predicting Energy Performance of a Net-Zero Energy Building: A Statistical Approach
Kneifel, Joshua; Webb, David
2016-01-01
Performance-based building requirements have become more prevalent because they give freedom in building design while still maintaining or exceeding the energy performance required by prescriptive-based requirements. In order to determine if building designs reach target energy efficiency improvements, it is necessary to estimate the energy performance of a building using predictive models and different weather conditions. Physics-based whole building energy simulation modeling is the most common approach. However, these physics-based models include underlying assumptions and require significant amounts of information in order to specify the input parameter values. An alternative approach to test the performance of a building is to develop a statistically derived predictive regression model using post-occupancy data that can accurately predict energy consumption and production based on a few common weather-based factors, thus requiring less information than simulation models. A regression model based on measured data should be able to predict energy performance of a building for a given day as long as the weather conditions are similar to those during the data collection time frame. This article uses data from the National Institute of Standards and Technology (NIST) Net-Zero Energy Residential Test Facility (NZERTF) to develop and validate a regression model to predict the energy performance of the NZERTF using two weather variables aggregated to the daily level, applies the model to estimate the energy performance of hypothetical NZERTFs located in different cities in the Mixed-Humid climate zone, and compares these estimates to the results from already existing EnergyPlus whole building energy simulations. This regression model exhibits agreement with EnergyPlus predictive trends in energy production and net consumption, but differs greatly in energy consumption.
The model can be used as a framework for alternative and more complex models based on the experimental data collected from the NZERTF. PMID:27956756
Hendriksen, Ilse C. E.; Mtove, George; Pedro, Alínia José; Gomes, Ermelinda; Silamut, Kamolrat; Lee, Sue J.; Mwambuli, Abraham; Gesase, Samwel; Reyburn, Hugh; Day, Nicholas P. J.; White, Nicholas J.; von Seidlein, Lorenz
2011-01-01
Background. Rapid diagnostic tests (RDTs) now play an important role in the diagnosis of falciparum malaria in many countries where the disease is endemic. Although these tests have been extensively evaluated in uncomplicated falciparum malaria, reliable data on their performance for diagnosing potentially lethal severe malaria is lacking. Methods. We compared a Plasmodium falciparum histidine-rich protein 2 (PfHRP2)–based RDT and a Plasmodium lactate dehydrogenase (pLDH)–based RDT with routine microscopy of a peripheral blood slide and expert microscopy as a reference standard for the diagnosis of severe malaria in 1898 children who presented with severe febrile illness at 2 centers in Mozambique and Tanzania. Results. The overall sensitivity, specificity, positive predictive value, and negative predictive value of the PfHRP2-based test were 94.0%, 70.9%, 85.4%, and 86.8%, respectively, and for the pLDH-based test, the values were 88.0%, 88.3%, 93.2%, and 80.3%, respectively. At parasite counts <1000 parasites/μL (n = 173), sensitivity of the pLDH-based test was low (45.7%), compared with that of the PfHRP2-based test (69.9%). Both RDTs performed better than did the routine slide reading in a clinical laboratory as assessed in 1 of the centers. Conclusion. The evaluated PfHRP2-based RDT is an acceptable alternative to routine microscopy for diagnosing severe malaria in African children and performed better than did the evaluated pLDH-based RDT. PMID:21467015
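The four metrics reported for each RDT all derive from a 2×2 confusion table against the reference standard. A minimal sketch (the counts in the example are illustrative, not the study's data):

```python
# Sensitivity, specificity, PPV and NPV from a 2x2 confusion table.
def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),  # cases correctly detected
        "specificity": tn / (tn + fp),  # non-cases correctly cleared
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

m = diagnostic_metrics(tp=90, fp=10, fn=10, tn=90)
```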
Negative HPV screening test predicts low cervical cancer risk better than negative Pap test
Based on a study that included more than 1 million women, investigators at NCI have determined that a negative test for HPV infection compared to a negative Pap test provides greater safety, or assurance, against future risk of cervical cancer.
Sayegh, Philip; Arentoft, Alyssa; Thaler, Nicholas S.; Dean, Andy C.; Thames, April D.
2014-01-01
The current study examined whether self-rated education quality predicts Wide Range Achievement Test-4th Edition (WRAT-4) Word Reading subtest and neurocognitive performance, and aimed to establish this subtest's construct validity as an educational quality measure. In a community-based adult sample (N = 106), we tested whether education quality both increased the prediction of Word Reading scores beyond demographic variables and predicted global neurocognitive functioning after adjusting for WRAT-4. As expected, race/ethnicity and education predicted WRAT-4 reading performance. Hierarchical regression revealed that when including education quality, the amount of WRAT-4's explained variance increased significantly, with race/ethnicity and both education quality and years as significant predictors. Finally, WRAT-4 scores, but not education quality, predicted neurocognitive performance. Results support WRAT-4 Word Reading as a valid proxy measure for education quality and a key predictor of neurocognitive performance. Future research should examine these findings in larger, more diverse samples to determine their robust nature. PMID:25404004
A literature review on fatigue and creep interaction
NASA Technical Reports Server (NTRS)
Chen, W. C.
1978-01-01
Life-time prediction methods, which are based on a number of empirical and phenomenological relationships, are presented. Three aspects are reviewed: effects of testing parameters on high temperature fatigue, life-time prediction, and high temperature fatigue crack growth.
A method of predicting the energy-absorption capability of composite subfloor beams
NASA Technical Reports Server (NTRS)
Farley, Gary L.
1987-01-01
A simple method of predicting the energy-absorption capability of composite subfloor beam structure was developed. The method is based upon the weighted sum of the energy-absorption capability of the constituent elements of a subfloor beam. An empirical data base of energy absorption results from circular and square cross section tube specimens was used in the prediction capability. The procedure is applicable to a wide range of subfloor beam structure. The procedure was demonstrated on three subfloor beam concepts. Agreement between test and prediction was within seven percent for all three cases.
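The weighted-sum prediction described above can be sketched in a few lines; the weights and per-element values would come from the empirical tube-specimen data base, and any numbers used here are placeholders:

```python
def beam_energy_absorption(elements):
    """Predict beam energy absorption as a weighted sum of the
    energy-absorption capabilities of its constituent elements.

    elements: iterable of (weight, element_energy_absorption) pairs,
    where each weight reflects that element's share of the cross section.
    """
    return sum(w * ea for w, ea in elements)
```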
Validity of one-repetition maximum predictive equations in men with spinal cord injury.
Ribeiro Neto, F; Guanais, P; Dornelas, E; Coutinho, A C B; Costa, R R G
2017-10-01
Cross-sectional study. The study aimed (a) to test the cross-validation of current one-repetition maximum (1RM) predictive equations in men with spinal cord injury (SCI); (b) to compare the current 1RM predictive equations to a newly developed equation based on the 4- to 12-repetition maximum test (4-12RM). SARAH Rehabilitation Hospital Network, Brasilia, Brazil. Forty-five men (mean age 28.0 years) with SCI between C6 and L2 causing complete motor impairment were enrolled in the study. Volunteers were tested, in random order, with the 1RM test or 4-12RM, separated by 2-3 days. Multiple regression analysis was used to generate an equation for predicting 1RM. There were no significant differences between the 1RM test and the current predictive equations. ICC values were significant and were classified as excellent for all current predictive equations. The predictive equation of Lombardi presented the best Bland-Altman results (0.5 kg and 12.8 kg for mean difference and interval range around the differences, respectively). The two newly created 1RM equation models demonstrated the same high adjusted R² (0.971, P<0.01) but different SEEs relative to measured 1RM (2.88 kg or 5.4% and 2.90 kg or 5.5%). All 1RM predictive equations are accurate for assessing individuals with SCI in the bench press exercise. However, the predictive equation of Lombardi presented the best associated cross-validity results. A specific 1RM prediction equation was also developed for individuals with SCI. The created equation should be tested to verify whether it is more accurate than the current ones.
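For context, the Lombardi equation referred to above is commonly given as 1RM = load x reps^0.10; a sketch under that assumption (the exponent is the commonly cited value, not a figure taken from this paper):

```python
def lombardi_1rm(weight_kg, reps):
    """Estimate one-repetition maximum from a submaximal set using the
    commonly cited Lombardi form: 1RM = load * reps**0.10."""
    return weight_kg * reps ** 0.10
```

A single rep returns the load itself, and the estimate grows slowly with repetition count, which is consistent with the 4-12RM range used to build the new equation.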
Procedures for adjusting regional regression models of urban-runoff quality using local data
Hoos, A.B.; Sisolak, J.K.
1993-01-01
Statistical operations termed model-adjustment procedures (MAPs) can be used to incorporate local data into existing regression models to improve the prediction of urban-runoff quality. Each MAP is a form of regression analysis in which the local data base is used as a calibration data set. Regression coefficients are determined from the local data base, and the resulting 'adjusted' regression models can then be used to predict storm-runoff quality at unmonitored sites. The response variable in the regression analyses is the observed load or mean concentration of a constituent in storm runoff for a single storm. The set of explanatory variables used in the regression analyses is different for each MAP, but always includes the predicted value of load or mean concentration from a regional regression model. The four MAPs examined in this study were: single-factor regression against the regional model prediction, P (termed MAP-1F-P); regression against P (termed MAP-R-P); regression against P and additional local variables (termed MAP-R-P+nV); and a weighted combination of P and a local-regression prediction (termed MAP-W). The procedures were tested by means of split-sample analysis, using data from three cities included in the Nationwide Urban Runoff Program: Denver, Colorado; Bellevue, Washington; and Knoxville, Tennessee. The MAP that provided the greatest predictive accuracy for the verification data set differed among the three test data bases and among model types (MAP-W for Denver and Knoxville, MAP-1F-P and MAP-R-P for Bellevue load models, and MAP-R-P+nV for Bellevue concentration models) and, in many cases, was not clearly indicated by the values of standard error of estimate for the calibration data set. A scheme to guide MAP selection, based on exploratory data analysis of the calibration data set, is presented and tested. The MAPs were tested for sensitivity to the size of a calibration data set. As expected, predictive accuracy of all MAPs for the verification data set decreased as the calibration data-set size decreased, but predictive accuracy was not as sensitive for the MAPs as it was for the local regression models.
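A sketch of the simplest procedure, MAP-1F-P, reading "single-factor regression" as fitting one multiplicative adjustment factor to the regional prediction P (an assumption about the exact functional form, which the report defines precisely):

```python
import numpy as np

def map_1f_p(regional_pred, local_obs):
    """Fit the single factor C minimizing ||local_obs - C * P||^2 over the
    local calibration data set, and return a function that adjusts new
    regional-model predictions."""
    p = np.asarray(regional_pred, dtype=float)
    y = np.asarray(local_obs, dtype=float)
    c = (p @ y) / (p @ p)              # least-squares slope through origin
    return lambda p_new: c * np.asarray(p_new, dtype=float)
```

The richer procedures (MAP-R-P, MAP-R-P+nV) would add an intercept and further local explanatory variables to the same regression framework.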
Oosting, Ellen; Hoogeboom, Thomas J; Appelman-de Vries, Suzan A; Swets, Adam; Dronkers, Jaap J; van Meeteren, Nico L U
2016-01-01
The aim of this study was to evaluate the value of conventional factors, the Risk Assessment and Predictor Tool (RAPT) and performance-based functional tests as predictors of delayed recovery after total hip arthroplasty (THA). A prospective cohort study was conducted in a regional hospital in the Netherlands with 315 patients attending for THA in 2012. The dependent variable, recovery of function, was assessed with the Modified Iowa Levels of Assistance scale. Delayed recovery was defined as taking more than 3 days to walk independently. Independent variables were age, sex, BMI, Charnley score, RAPT score and scores for four performance-based tests [2-minute walk test, timed up and go test (TUG), 10-meter walking test (10 mW) and hand grip strength]. Regression analysis with all variables identified older age (>70 years), Charnley score C, slow walking speed (10 mW >10.0 s) and poor functional mobility (TUG >10.5 s) as the best predictors of delayed recovery of function. This model (AUC 0.85, 95% CI 0.79-0.91) performed better than a model with conventional factors and RAPT scores, and significantly better (p = 0.04) than a model with only conventional factors (AUC 0.81, 95% CI 0.74-0.87). The combination of performance-based tests and conventional factors predicted inpatient functional recovery after THA. Two simple functional performance-based tests have significant added value over a more conventional screening with age and comorbidities for predicting recovery of functioning immediately after total hip surgery. Patients over 70 years old, with comorbidities, with a TUG score >10.5 s and a walking speed <1.0 m/s are at risk of delayed recovery of functioning. Those high-risk patients need an accurate discharge plan and could benefit from targeted pre- and postoperative therapeutic exercise programs.
Behavioral Change Theories Can Inform the Prediction of Young Adults' Adoption of a Plant-Based Diet
ERIC Educational Resources Information Center
Wyker, Brett A.; Davison, Kirsten K.
2010-01-01
Objective: Drawing on the Theory of Planned Behavior (TPB) and the Transtheoretical Model (TTM), this study (1) examines links between stages of change for following a plant-based diet (PBD) and consuming more fruits and vegetables (FV); (2) tests an integrated theoretical model predicting intention to follow a PBD; and (3) identifies associated…
Smirnova, Lena; Hogberg, Helena T.; Leist, Marcel; Hartung, Thomas
2016-01-01
In recent years, neurodevelopmental problems in children have increased at a rate that suggests lifestyle factors and chemical exposures as likely contributors. When environmental chemicals contribute to neurodevelopmental disorders, developmental neurotoxicity (DNT) becomes an enormous concern. But how can it be tackled? Current animal-test-based guidelines are prohibitively expensive, at $1.4 million per substance, while their predictivity for human health effects may be limited, and mechanistic data that would help species extrapolation are not available. A broader screening for substances of concern requires a reliable testing strategy, applicable to larger numbers of substances, and sufficiently predictive to warrant further testing. This review discusses the evidence for possible contributions of environmental chemicals to DNT, limitations of the current test paradigm, emerging concepts and technologies pertinent to in vitro DNT testing and assay evaluation, as well as the prospect of a paradigm shift based on 21st century technologies. PMID:24687333
Data base for the prediction of inlet external drag
NASA Technical Reports Server (NTRS)
Mcmillan, O. J.; Perkins, E. W.; Perkins, S. C., Jr.
1980-01-01
Results are presented from a study to define and evaluate the data base for predicting an airframe/propulsion system interference effect shown to be of considerable importance, inlet external drag. The study is focused on supersonic tactical aircraft with highly integrated jet propulsion systems, although some information is included for supersonic strategic aircraft and for transport aircraft designed for high subsonic or low supersonic cruise. The data base for inlet external drag is considered to consist of the theoretical and empirical prediction methods as well as the experimental data identified in an extensive literature search. The state of the art in the subsonic and transonic speed regimes is evaluated. The experimental data base is organized and presented in a series of tables in which the test article, the quantities measured and the ranges of test conditions covered are described for each set of data; in this way, the breadth of coverage and gaps in the existing experimental data are evident. Prediction methods are categorized by method of solution, type of inlet and speed range to which they apply, major features are given, and their accuracy is assessed by means of comparison to experimental data.
Use of belowground growing degree days to predict rooting of dormant hardwood cuttings of Populus
R.S., Jr. Zalesny; E.O. Bauer; D.E. Riemenschneider
2004-01-01
Planting Populus cuttings based on calendar days neglects soil temperature extremes and does not promote rooting based on specific genotypes. Our objectives were to: 1) test the biological efficacy of a thermal index based on belowground growing degree days (GDD) across the growing period, 2) test for interactions between belowground GDD and clones,...
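Belowground GDD accumulate like conventional air-temperature GDD but use soil temperatures; a generic sketch (the base temperature here is a placeholder, not the study's value):

```python
def growing_degree_days(daily_min_max, t_base=10.0):
    """Accumulate growing degree days over a period:
    sum of max(0, daily mean soil temperature - base temperature)."""
    return sum(max(0.0, (t_min + t_max) / 2.0 - t_base)
               for t_min, t_max in daily_min_max)
```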
Thorenz, Ute R; Kundel, Michael; Müller, Lars; Hoffmann, Thorsten
2012-11-01
In this work, we describe a simple diffusion capillary device for the generation of various organic test gases. Using a set of basic equations the output rate of the test gas devices can easily be predicted only based on the molecular formula and the boiling point of the compounds of interest. Since these parameters are easily accessible for a large number of potential analytes, even for those compounds which are typically not listed in physico-chemical handbooks or internet databases, the adjustment of the test gas source to the concentration range required for the individual analytical application is straightforward. The agreement of the predicted and measured values is shown to be valid for different groups of chemicals, such as halocarbons, alkanes, alkenes, and aromatic compounds and for different dimensions of the diffusion capillaries. The limits of the predictability of the output rates are explored and observed to result in an underprediction of the output rates when very thin capillaries are used. It is demonstrated that pressure variations are responsible for the observed deviation of the output rates. To overcome the influence of pressure variations and at the same time to establish a suitable test gas source for highly volatile compounds, also the usability of permeation sources is explored, for example for the generation of molecular bromine test gases.
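The abstract does not reproduce the equations, but a standard route from boiling point to output rate combines Trouton's rule (to approximate the enthalpy of vaporization), the Clausius-Clapeyron relation (for vapor pressure at the working temperature), and the classical diffusion-tube equation. The diffusion coefficient, which the authors derive from the molecular formula, is taken here as a given input; treat this as a sketch of the general approach, not the authors' exact model:

```python
import math

R = 8.314          # gas constant, J/(mol K)
P_ATM = 101325.0   # ambient pressure, Pa

def vapor_pressure(t, t_boil):
    """Clausius-Clapeyron with Trouton's rule (dHvap ~ 88 J/(mol K) * Tb);
    temperatures in kelvin, result in Pa."""
    dh_vap = 88.0 * t_boil
    return P_ATM * math.exp(-(dh_vap / R) * (1.0 / t - 1.0 / t_boil))

def output_rate(diff_coef, area, length, molar_mass, t, t_boil):
    """Mass output rate (kg/s) of a diffusion capillary of given
    cross-sectional area (m^2) and length (m), via the standard
    diffusion-tube equation."""
    p_v = vapor_pressure(t, t_boil)
    return (diff_coef * area * molar_mass * P_ATM
            / (R * t * length)) * math.log(P_ATM / (P_ATM - p_v))
```

The output rate rises steeply with temperature through the vapor-pressure term, which is why the source must be thermostatted; the pressure sensitivity discussed in the abstract enters through the ambient-pressure terms.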
Microcomputer-based tests for repeated-measures: Metric properties and predictive validities
NASA Technical Reports Server (NTRS)
Kennedy, Robert S.; Baltzley, Dennis R.; Dunlap, William P.; Wilkes, Robert L.; Kuntz, Lois-Ann
1989-01-01
A menu of psychomotor and mental acuity tests were refined. Field applications of such a battery are, for example, a study of the effects of toxic agents or exotic environments on performance readiness, or the determination of fitness for duty. The key requirement of these tasks is that they be suitable for repeated-measures applications, and so questions of stability and reliability are a continuing, central focus of this work. After the initial (practice) session, seven replications of 14 microcomputer-based performance tests (32 measures) were completed by 37 subjects. Each test in the battery had previously been shown to stabilize in less than five 90-second administrations and to possess retest reliabilities greater than r = 0.707 for three minutes of testing. However, all the tests had never been administered together as a battery and they had never been self-administered. In order to provide predictive validity for intelligence measurement, the Wechsler Adult Intelligence Scale-Revised and the Wonderlic Personnel Test were obtained on the same subjects.
White, H; Racine, J
2001-01-01
We propose tests for individual and joint irrelevance of network inputs. Such tests can be used to determine whether an input or group of inputs "belong" in a particular model, thus permitting valid statistical inference based on estimated feedforward neural-network models. The approaches employ well-known statistical resampling techniques. We conduct a small Monte Carlo experiment showing that our tests have reasonable level and power behavior, and we apply our methods to examine whether there are predictable regularities in foreign exchange rates. We find that exchange rates do appear to contain information that is exploitable for enhanced point prediction, but the nature of the predictive relations evolves through time.
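A crude stand-in for such a test is permutation importance: shuffle one input column and measure the increase in predictive loss. The linear fit below replaces the paper's neural networks purely for brevity, and the function names are illustrative, not the authors' procedure:

```python
import numpy as np

def fit_linear(X, y):
    """Least-squares model as a cheap stand-in for a trained network."""
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda Z: beta[0] + np.asarray(Z) @ beta[1:]

def permutation_importance(X, y, j, n_rep=100, seed=0):
    """Mean increase in MSE when input column j is shuffled; a value near
    zero suggests the input is irrelevant to the fitted predictor."""
    rng = np.random.default_rng(seed)
    model = fit_linear(X, y)
    base = np.mean((model(X) - y) ** 2)
    gains = []
    for _ in range(n_rep):
        Xs = X.copy()
        Xs[:, j] = rng.permutation(Xs[:, j])
        gains.append(np.mean((model(Xs) - y) ** 2) - base)
    return float(np.mean(gains))
```

A formal test, as in the paper, would turn such a statistic into a p-value via bootstrap resampling rather than stopping at a point estimate.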
Fang, Lingzhao; Sahana, Goutam; Ma, Peipei; Su, Guosheng; Yu, Ying; Zhang, Shengli; Lund, Mogens Sandø; Sørensen, Peter
2017-05-12
A better understanding of the genetic architecture of complex traits can contribute to improve genomic prediction. We hypothesized that genomic variants associated with mastitis and milk production traits in dairy cattle are enriched in hepatic transcriptomic regions that are responsive to intra-mammary infection (IMI). Genomic markers [e.g. single nucleotide polymorphisms (SNPs)] from those regions, if included, may improve the predictive ability of a genomic model. We applied a genomic feature best linear unbiased prediction model (GFBLUP) to implement the above strategy by considering the hepatic transcriptomic regions responsive to IMI as genomic features. GFBLUP, an extension of GBLUP, includes a separate genomic effect of SNPs within a genomic feature, and allows differential weighting of the individual marker relationships in the prediction equation. Since GFBLUP is computationally intensive, we investigated whether a SNP set test could be a computationally fast way to preselect predictive genomic features. The SNP set test assesses the association between a genomic feature and a trait based on single-SNP genome-wide association studies. We applied these two approaches to mastitis and milk production traits (milk, fat and protein yield) in Holstein (HOL, n = 5056) and Jersey (JER, n = 1231) cattle. We observed that a majority of genomic features were enriched in genomic variants that were associated with mastitis and milk production traits. Compared to GBLUP, the accuracy of genomic prediction with GFBLUP was marginally improved (3.2 to 3.9%) in within-breed prediction. The highest increase (164.4%) in prediction accuracy was observed in across-breed prediction. The significance of genomic features based on the SNP set test was correlated with changes in prediction accuracy of GFBLUP (P < 0.05). 
GFBLUP provides a framework for integrating multiple layers of biological knowledge to provide novel insights into the biological basis of complex traits, and to improve the accuracy of genomic prediction. The SNP set test might be used as a first-step to improve GFBLUP models. Approaches like GFBLUP and SNP set test will become increasingly useful, as the functional annotations of genomes keep accumulating for a range of species and traits.
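One simple form of such a set test sums the squared single-SNP statistics within a feature and compares the sum against random SNP sets of the same size; a sketch (this is a generic sum test built on single-SNP GWAS output, not necessarily the exact statistic the authors used):

```python
import numpy as np

def snp_set_test(z_scores, feature_idx, n_null=1000, seed=0):
    """Empirical p-value for whether the summed squared z-statistic of a
    genomic feature exceeds that of random SNP sets of the same size."""
    rng = np.random.default_rng(seed)
    z2 = np.asarray(z_scores, dtype=float) ** 2
    feature_idx = np.asarray(feature_idx)
    obs = z2[feature_idx].sum()
    m = feature_idx.size
    null = np.array([z2[rng.choice(z2.size, m, replace=False)].sum()
                     for _ in range(n_null)])
    return (1 + np.count_nonzero(null >= obs)) / (1 + n_null)
```

Features passing such a screen would then be promoted to the computationally heavier GFBLUP model.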
Dilatancy Criteria for Salt Cavern Design: A Comparison Between Stress- and Strain-Based Approaches
NASA Astrophysics Data System (ADS)
Labaune, P.; Rouabhi, A.; Tijani, M.; Blanco-Martín, L.; You, T.
2018-02-01
This paper presents a new approach for salt cavern design, based on the use of the onset of dilatancy as a design threshold. In the proposed approach, a rheological model that includes dilatancy at the constitutive level is developed, and a strain-based dilatancy criterion is defined. Compared to classical design methods, which consist of simulating cavern behavior through creep laws (fitted on long-term tests) and then using a criterion (derived from short-term tests or experience) to determine the stability of the excavation, the proposed approach is consistent with both short- and long-term conditions. The new strain-based dilatancy criterion is compared to a stress-based dilatancy criterion through numerical simulations of salt caverns under cyclic loading conditions. The dilatancy zones predicted by the strain-based criterion are larger than the ones predicted by the stress-based criterion, which is conservative yet constructive for design purposes.
A Path to an Instructional Science: Data-Generated vs. Postulated Models
ERIC Educational Resources Information Center
Gropper, George L.
2016-01-01
Psychological testing can serve as a prototype on which to base a data-generated approach to instructional design. In "testing batteries" tests are used to predict achievement. In the proposed approach batteries of prescriptions would be used to produce achievement. In creating "test batteries" tests are selected for their…
Liu, Yan; Li, Xiaohong; Johnson, Margaret; Smith, Collette; Kamarulzaman, Adeeba bte; Montaner, Julio; Mounzer, Karam; Saag, Michael; Cahn, Pedro; Cesar, Carina; Krolewiecki, Alejandro; Sanne, Ian; Montaner, Luis J.
2012-01-01
Background Global programs of anti-HIV treatment depend on sustained laboratory capacity to assess treatment initiation thresholds and treatment response over time. Currently, there is no valid alternative to CD4 count testing for monitoring immunologic responses to treatment, but laboratory cost and capacity limit access to CD4 testing in resource-constrained settings. Thus, methods to prioritize patients for CD4 count testing could improve treatment monitoring by optimizing resource allocation. Methods and Findings Using a prospective cohort of HIV-infected patients (n = 1,956) monitored upon antiretroviral therapy initiation in seven clinical sites with distinct geographical and socio-economic settings, we retrospectively apply a novel prediction-based classification (PBC) modeling method. The model uses repeatedly measured biomarkers (white blood cell count and lymphocyte percent) to predict CD4+ T cell outcome through first-stage modeling and subsequent classification based on clinically relevant thresholds (CD4+ T cell count of 200 or 350 cells/µl). The algorithm correctly classified 90% (cross-validation estimate = 91.5%, standard deviation [SD] = 4.5%) of CD4 count measurements <200 cells/µl in the first year of follow-up; if laboratory testing is applied only to patients predicted to be below the 200-cells/µl threshold, we estimate a potential savings of 54.3% (SD = 4.2%) in CD4 testing capacity. A capacity savings of 34% (SD = 3.9%) is predicted using a CD4 threshold of 350 cells/µl. Similar results were obtained over the 3 y of follow-up available (n = 619). Limitations include a need for future economic healthcare outcome analysis, a need for assessment of extensibility beyond the 3-y observation time, and the need to assign a false positive threshold. 
Conclusions Our results support the use of PBC modeling as a triage point at the laboratory, lessening the need for laboratory-based CD4+ T cell count testing; implementation of this tool could help optimize the use of laboratory resources, directing CD4 testing towards higher-risk patients. However, further prospective studies and economic analyses are needed to demonstrate that the PBC model can be effectively applied in clinical settings. PMID:22529752
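The two-stage triage logic, first-stage prediction of CD4 count from inexpensive markers and subsequent classification against a clinical threshold, can be sketched as follows; the linear model and two-feature input below are deliberate simplifications of the paper's repeated-measures approach:

```python
import numpy as np

def pbc_triage(train_feats, train_cd4, new_feats, threshold=200.0):
    """First-stage regression of CD4 count on cheap markers (e.g. WBC
    count and lymphocyte %), then classification: only patients predicted
    below the threshold are flagged for confirmatory CD4 testing."""
    A = np.column_stack([np.ones(len(train_feats)), train_feats])
    beta, *_ = np.linalg.lstsq(A, train_cd4, rcond=None)
    B = np.column_stack([np.ones(len(new_feats)), new_feats])
    flagged = (B @ beta) < threshold
    savings = 1.0 - flagged.mean()   # share of patients spared a CD4 test
    return flagged, savings
```

The reported 54.3% capacity savings at the 200-cells/µl threshold corresponds to the `savings` quantity here, computed over the cohort.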
Hamashima, Chisato; Sasazuki, Shizuka; Inoue, Manami; Tsugane, Shoichiro
2017-03-09
Chronic Helicobacter pylori infection plays a central role in the development of gastric cancer as shown by biological and epidemiological studies. The H. pylori antibody and serum pepsinogen (PG) tests have been anticipated to predict gastric cancer development. We determined the predictive sensitivity and specificity of gastric cancer development using these tests. Receiver operating characteristic analysis was performed, and areas under the curve were estimated. The predictive sensitivity and specificity of gastric cancer development were compared among single tests and combined methods using serum pepsinogen and H. pylori antibody tests. From a large-scale population-based cohort of over 100,000 subjects followed between 1990 and 2004, 497 gastric cancer subjects and 497 matched healthy controls were chosen. The predictive sensitivity and specificity were low in all single tests and combination methods. The highest predictive sensitivity and specificity were obtained for the serum PG I/II ratio. The optimal PG I/II cut-off values were 2.5 and 3.0. At a PG I/II cut-off value of 3.0, the sensitivity was 86.9% and the specificity was 39.8%. Even if three biomarkers were combined, the sensitivity was 97.2% and the specificity was 21.1% when the cut-off values were 3.0 for PG I/II, 70 ng/mL for PG I, and 10.0 U/mL for H. pylori antibody. The predictive accuracy of gastric cancer development was low with the serum pepsinogen and H. pylori antibody tests even if these tests were combined. To adopt these biomarkers for gastric cancer screening, a high specificity is required. When these tests are adopted for gastric cancer screening, they should be carefully interpreted with a clear understanding of their limitations.
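As a worked illustration of how such cutoffs translate into sensitivity and specificity (with a low PG I/II ratio counted as the positive, atrophy-indicating result; the ratios below are invented, not the cohort's data):

```python
def pg_ratio_performance(case_ratios, control_ratios, cutoff):
    """Sensitivity and specificity (%) of the screening rule
    'positive if PG I/II ratio <= cutoff'."""
    sens = 100.0 * sum(r <= cutoff for r in case_ratios) / len(case_ratios)
    spec = 100.0 * sum(r > cutoff for r in control_ratios) / len(control_ratios)
    return sens, spec
```

Sweeping the cutoff and plotting sensitivity against (100 - specificity) yields the ROC curve from which the study's areas under the curve were estimated.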
Interpreting IgE sensitization tests in food allergy.
Chokshi, Niti Y; Sicherer, Scott H
2016-01-01
Food allergies are increasing in prevalence, and with them, IgE testing to foods is becoming more commonplace. Food-specific IgE tests, including serum assays and prick skin tests, are sensitive for detecting the presence of food-specific IgE (sensitization), but specificity for predicting clinical allergy is limited. Therefore, positive tests are generally not, in isolation, diagnostic of clinical disease. However, rational test selection and interpretation, based on clinical history and an understanding of food allergy epidemiology and pathophysiology, makes these tests invaluable. Additionally, highly predictive test cutoff values exist for common allergens in atopic children. Newer testing methodologies, such as component resolved diagnostics, are promising for increasing the utility of testing. This review highlights the use of IgE serum tests in the diagnosis of food allergy.
AlzhCPI: A knowledge base for predicting chemical-protein interactions towards Alzheimer's disease.
Fang, Jiansong; Wang, Ling; Li, Yecheng; Lian, Wenwen; Pang, Xiaocong; Wang, Hong; Yuan, Dongsheng; Wang, Qi; Liu, Ai-Lin; Du, Guan-Hua
2017-01-01
Alzheimer's disease (AD) is a complicated progressive neurodegeneration disorder. To confront AD, scientists are searching for multi-target-directed ligands (MTDLs) to delay disease progression. The in silico prediction of chemical-protein interactions (CPI) can accelerate target identification and drug discovery. Previously, we developed 100 binary classifiers to predict the CPI for 25 key targets against AD using the multi-target quantitative structure-activity relationship (mt-QSAR) method. In this investigation, we aimed to apply the mt-QSAR method to enlarge the model library to predict CPI towards AD. Another 104 binary classifiers were further constructed to predict the CPI for 26 preclinical AD targets based on the naive Bayesian (NB) and recursive partitioning (RP) algorithms. The internal 5-fold cross-validation and external test set validation were applied to evaluate the performance of the training sets and test set, respectively. The area under the receiver operating characteristic curve (ROC) for the test sets ranged from 0.629 to 1.0, with an average of 0.903. In addition, we developed a web server named AlzhCPI to integrate the comprehensive information of approximately 204 binary classifiers, which has potential applications in network pharmacology and drug repositioning. AlzhCPI is available online at http://rcidm.org/AlzhCPI/index.html. To illustrate the applicability of AlzhCPI, the developed system was employed for the systems pharmacology-based investigation of shichangpu against AD to enhance the understanding of the mechanisms of action of shichangpu from a holistic perspective.
United3D: a protein model quality assessment program that uses two consensus based methods.
Terashi, Genki; Oosawa, Makoto; Nakamura, Yuuki; Kanou, Kazuhiko; Takeda-Shitaka, Mayuko
2012-01-01
In protein structure prediction, such as template-based modeling and free modeling (ab initio modeling), the step that assesses the quality of protein models is very important. We have developed a model quality assessment (QA) program United3D that uses an optimized clustering method and a simple Cα atom contact-based potential. United3D automatically estimates the quality scores (Qscore) of predicted protein models that are highly correlated with the actual quality (GDT_TS). The performance of United3D was tested in the ninth Critical Assessment of protein Structure Prediction (CASP9) experiment. In CASP9, United3D showed the lowest average loss of GDT_TS (5.3) among the QA methods participated in CASP9. This result indicates that the performance of United3D to identify the high quality models from the models predicted by CASP9 servers on 116 targets was best among the QA methods that were tested in CASP9. United3D also produced high average Pearson correlation coefficients (0.93) and acceptable Kendall rank correlation coefficients (0.68) between the Qscore and GDT_TS. This performance was competitive with the other top ranked QA methods that were tested in CASP9. These results indicate that United3D is a useful tool for selecting high quality models from many candidate model structures provided by various modeling methods. United3D will improve the accuracy of protein structure prediction.
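Consensus QA methods of this kind score each candidate model by how structurally similar it is to the rest of the pool; a minimal sketch given a precomputed pairwise similarity matrix (a GDT_TS-like measure, details assumed):

```python
import numpy as np

def consensus_scores(pairwise_sim):
    """Score each model as its mean similarity to every other model in the
    pool; models near the 'center' of the ensemble score highest."""
    S = np.asarray(pairwise_sim, dtype=float)
    n = S.shape[0]
    return (S.sum(axis=1) - np.diag(S)) / (n - 1)
```

United3D refines this basic idea with an optimized clustering step and adds a Cα contact-based potential as an independent, physics-flavored term.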
Rajgaria, R.; Wei, Y.; Floudas, C. A.
2010-01-01
An integer linear optimization model is presented to predict residue contacts in β, α + β, and α/β proteins. The total energy of a protein is expressed as sum of a Cα – Cα distance dependent contact energy contribution and a hydrophobic contribution. The model selects contacts that assign lowest energy to the protein structure while satisfying a set of constraints that are included to enforce certain physically observed topological information. A new method based on hydrophobicity is proposed to find the β-sheet alignments. These β-sheet alignments are used as constraints for contacts between residues of β-sheets. This model was tested on three independent protein test sets and CASP8 test proteins consisting of β, α + β, α/β proteins and was found to perform very well. The average accuracy of the predictions (separated by at least six residues) was approximately 61%. The average true positive and false positive distances were also calculated for each of the test sets and they are 7.58 Å and 15.88 Å, respectively. Residue contact prediction can be directly used to facilitate the protein tertiary structure prediction. This proposed residue contact prediction model is incorporated into the first principles protein tertiary structure prediction approach, ASTRO-FOLD. The effectiveness of the contact prediction model was further demonstrated by the improvement in the quality of the protein structure ensemble generated using the predicted residue contacts for a test set of 10 proteins. PMID:20225257
2015-11-06
Predator pilot vacancies. The purpose of this study was to evaluate computer-based intelligence and neuropsychological testing on training...high-risk, high-demand occupation. Subject terms: remotely piloted aircraft, RPA, neuropsychological screening, intelligence testing, computer-based testing, Predator, MQ-1.
An intelligent system with EMG-based joint angle estimation for telemanipulation.
Suryanarayanan, S; Reddy, N P; Gupta, V
1996-01-01
Bio-control of telemanipulators is being researched as an alternate control strategy. This study investigates the use of surface EMG from the biceps to predict joint angle during flexion of the arm that can be used to control an anthropomorphic telemanipulator. An intelligent system based on neural networks and fuzzy logic has been developed to use the processed surface EMG signal and predict the joint angle. The system has been tested on various angles of flexion-extension of the arm and at several speeds of flexion-extension. Preliminary results show the RMS error between the predicted angle and the actual angle to be less than 3% during training and less than 15% during testing. The technique of direct bio-control using EMG has the potential as an interface for telemanipulation applications.
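The processing chain, rectified EMG, envelope feature, learned mapping to joint angle, can be sketched as below; a polynomial fit stands in for the paper's neural-network/fuzzy-logic estimator, and the window length is an assumed parameter:

```python
import numpy as np

def emg_envelope(raw_emg, window=50):
    """Moving-RMS envelope of a raw surface-EMG signal."""
    s2 = np.asarray(raw_emg, dtype=float) ** 2
    return np.sqrt(np.convolve(s2, np.ones(window) / window, mode="same"))

def fit_angle_estimator(envelope, angles, degree=2):
    """Fit a polynomial mapping from EMG envelope to elbow joint angle,
    as a simple stand-in for the intelligent estimator of the abstract."""
    coeffs = np.polyfit(envelope, angles, degree)
    return lambda env: np.polyval(coeffs, env)
```

In use, the estimator would be trained on flexion-extension trials at several speeds and then drive the telemanipulator's joint command in real time.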
A microstructurally based model of solder joints under conditions of thermomechanical fatigue
NASA Astrophysics Data System (ADS)
Frear, D. R.; Burchett, S. N.; Rashid, M. M.
The thermomechanical fatigue failure of solder joints is increasingly becoming an important reliability issue. We present two computational methodologies that have been developed to predict the behavior of near eutectic Sn-Pb solder joints under fatigue conditions that are based on metallurgical tests as fundamental input for constitutive relations. The two-phase model mathematically predicts the heterogeneous coarsening behavior of near eutectic Sn-Pb solder. The finite element simulations from this model agree well with experimental thermomechanical fatigue tests. The simulations show that the presence of an initial heterogeneity in the solder microstructure could significantly degrade the fatigue lifetime. The single phase model is a computational technique that was developed to predict solder joint behavior using materials data for constitutive relation constants that could be determined through straightforward metallurgical experiments. A shear/torsion test sample was developed to impose strain in two different orientations. Materials constants were derived from these tests and the results showed an adequate fit to experimental results. The single-phase model could be very useful for conditions where microstructural evolution is not a dominant factor in fatigue.
Huo, Dezheng; Senie, Ruby T.; Daly, Mary; Buys, Saundra S.; Cummings, Shelly; Ogutha, Jacqueline; Hope, Kisha; Olopade, Olufunmilayo I.
2009-01-01
Purpose BRCAPRO, a BRCA mutation carrier prediction model, was developed on the basis of studies in individuals of Ashkenazi Jewish and European ancestry. We evaluated the performance of the BRCAPRO model among clinic-based minority families. We also assessed the clinical utility of mutation status of probands (the first individual tested in a family) in the recommendation of BRCA mutation testing for other at-risk family members. Patients and Methods A total of 292 minority families with at least one member who was tested for BRCA mutations were identified through the Breast Cancer Family Registry and the University of Chicago. Using the BRCAPRO model, the predicted likelihood of carrying BRCA mutations was generated. Area under the receiver operating characteristic curves (AUCs) were calculated. Results There were 104 African American, 130 Hispanic, 37 Asian-American, and 21 other minority families. The AUC was 0.748 (95% CI, 0.672 to 0.823) for all minorities combined. There was a statistically nonsignificant trend for BRCAPRO to perform better in Hispanic families than in other minority families. After taking into account the mutation status of probands, BRCAPRO performance in additional tested family members was improved: the AUC increased from 0.760 to 0.902. Conclusion The findings support the use of BRCAPRO in pretest BRCA mutation prediction among minority families in clinical settings, but there is room for improvement in ethnic groups other than Hispanics. Knowledge of the mutation status of the proband provides additional predictive value, which may guide genetic counselors in recommending BRCA testing of additional relatives when a proband has tested negative. PMID:19188678
Case-based statistical learning applied to SPECT image classification
NASA Astrophysics Data System (ADS)
Górriz, Juan M.; Ramírez, Javier; Illán, I. A.; Martínez-Murcia, Francisco J.; Segovia, Fermín.; Salas-Gonzalez, Diego; Ortiz, A.
2017-03-01
Statistical learning and decision theory play a key role in many areas of science and engineering. Some examples include time series regression and prediction, optical character recognition, signal detection in communications, and biomedical applications for diagnosis and prognosis. This paper deals with the topic of learning from biomedical image data in the classification problem. In a typical scenario we have a training set that is employed to fit a prediction model or learner, and a testing set on which the learner is applied in order to predict the outcome for new unseen patterns. Both processes are usually kept completely separate to avoid over-fitting and because, in practice, the new unseen objects (testing set) have unknown outcomes. However, the outcome takes one of a discrete set of values, as in the binary diagnosis problem. Thus, assumptions about these outcome values can be established to obtain the most likely prediction model at the training stage, which could improve the overall classification accuracy on the testing set, or at least keep its performance at the level of the selected statistical classifier. In this sense, a novel case-based learning (c-learning) procedure is proposed which combines hypothesis testing over a discrete set of expected outcomes with a cross-validated classification stage.
Drug Target Mining and Analysis of the Chinese Tree Shrew for Pharmacological Testing
Liu, Jie; Lee, Wen-hui; Zhang, Yun
2014-01-01
The discovery of new drugs requires the development of improved animal models for drug testing. The Chinese tree shrew is considered to be a realistic candidate model. To assess the potential of the Chinese tree shrew for pharmacological testing, we performed drug target prediction and analysis on genomic and transcriptomic scales. Using our pipeline, 3,482 proteins were predicted to be drug targets. Of these predicted targets, 446 and 1,049 proteins with the highest rank and total scores, respectively, included homologs of targets for cancer chemotherapy, depression, age-related decline and cardiovascular disease. Based on comparative analyses, more than half of the drug target proteins identified from the tree shrew genome were shown to have higher similarity to human targets than their mouse counterparts. Target validation also demonstrated that the constitutive expression of the proteinase-activated receptors of tree shrew platelets is similar to that of human platelets but differs from that of mouse platelets. We developed an effective pipeline and search strategy for drug target prediction and the evaluation of model-based target identification for drug testing. This work provides useful information for future studies of the Chinese tree shrew as a source of novel targets for drug discovery research. PMID:25105297
Hsiao, Pei-Chi; Yu, Wan-Hui; Lee, Shih-Chieh; Chen, Mei-Hsiang; Hsieh, Ching-Lin
2018-06-14
The responsiveness and predictive validity of the Tablet-based Symbol Digit Modalities Test (T-SDMT) are unknown, which limits the utility of the T-SDMT in both clinical and research settings. The purpose of this study was to examine the responsiveness and predictive validity of the T-SDMT in inpatients with stroke. A follow-up, repeated-assessments design. One rehabilitation unit at a local medical center. A total of 50 inpatients receiving rehabilitation completed T-SDMT assessments at admission to and discharge from a rehabilitation ward. The median follow-up period was 14 days. The Barthel index (BI) was assessed at discharge and was used as the criterion for predictive validity. The mean changes in the T-SDMT scores between admission and discharge were statistically significant (paired t-test = 3.46, p = 0.001). The T-SDMT scores showed a nearly moderate standardized response mean (0.49). A moderate association (Pearson's r = 0.47) was found between the scores of the T-SDMT at admission and those of the BI at discharge, indicating good predictive validity of the T-SDMT. Our results support the responsiveness and predictive validity of the T-SDMT in patients with stroke receiving rehabilitation in hospitals. This study provides empirical evidence supporting the use of the T-SDMT as an outcome measure for assessing processing speed in inpatients with stroke. The scores of the T-SDMT could be used to predict basic activities of daily living function in inpatients with stroke.
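The two responsiveness statistics reported above, the paired t statistic and the standardized response mean (SRM), can be sketched as follows; the admission/discharge scores in any use of this sketch would be invented, not the study's data.

```python
# SRM = mean change / SD of change; paired t = mean change / (SD / sqrt(n)).
# A minimal sketch of the responsiveness statistics, not the study's code.
import math

def paired_stats(admission, discharge):
    diffs = [d - a for a, d in zip(admission, discharge)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean / sd, mean / (sd / math.sqrt(n))  # (SRM, paired t)
```

By convention an SRM near 0.5, as reported here, is interpreted as a moderate responsiveness effect.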
A nonlinear CDM based damage growth law for ductile materials
NASA Astrophysics Data System (ADS)
Gautam, Abhinav; Priya Ajit, K.; Sarkar, Prabir Kumar
2018-02-01
A nonlinear ductile damage growth criterion is proposed based on the continuum damage mechanics (CDM) approach. The model is derived in the framework of thermodynamically consistent CDM assuming damage to be isotropic. In this study, the damage dissipation potential is also derived to be a function of a varying strain hardening exponent in addition to the damage strain energy release rate density. Uniaxial tensile tests and load-unload-cyclic tensile tests for AISI 1020 steel, AISI 1030 steel and Al 2024 aluminum alloy are considered for the determination of their respective damage variable D and the other parameters required for the model(s). The experimental results are very closely predicted, with a deviation of 0%-3%, by the proposed model for each of the materials. The model's damage growth predictions are also compared with those of other models in the literature. The present model detects the state of damage quantitatively at any level of plastic strain and uses simpler material tests to find its parameters, so it should be useful in metal forming industries for assessing damage growth for a desired deformation level a priori. The superiority of the new model is clarified by the deviations in the predictability of test results by the other models.
NASA Astrophysics Data System (ADS)
Colen, Charles Raymond, Jr.
There have been numerous studies of ultrasonic nondestructive testing and wood fiber composites. The problem of the study was to ascertain whether ultrasonic nondestructive testing can be used in place of destructive testing to obtain the modulus of elasticity (MOE) of the wood/agricultural material with comparable results. The uniqueness of this research is that it addressed the type of content (cornstalks and switchgrass) being used with the wood fibers and the type of adhesives (soybean-based) associated with the production of these composite materials. Two research questions were addressed in the study. The major objective was to determine whether one can predict the destructive test MOE value from the nondestructive test MOE value. The population of the study was wood/agricultural fiberboards made from wood fibers, cornstalks, and switchgrass bonded together with soybean-based, urea-formaldehyde, and phenol-formaldehyde adhesives. Correlational analysis was used to determine whether there was a relationship between the two tests. Regression analysis was performed to derive a prediction equation for the destructive test MOE value. Data were collected on both procedures using ultrasonic nondestructive testing and 3-point destructive testing. The results produced a simple linear regression model that was adequate for predicting destructive MOE values when the nondestructive MOE value is known. Nearly all of the error in the model equation was attributable to the destructive test MOE values for the composites. The nondestructive MOE values used to produce the linear regression model explained 83% of the variability in the destructive test MOE values. The study also showed that, given the variability in the results from the destructive tests obtained with the equipment used, the model associated with the study is as good as it could be.
In this study, an ultrasonic signal was used to determine the MOE values on nondestructive tests. Future research studies could use the same or other hardboards to examine how the resins affect the ultrasonic signal.
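The regression step described above can be sketched as a simple least-squares fit of destructive MOE on nondestructive MOE, with R^2 corresponding to the 83% variability-explained figure; the MOE pairs passed to this sketch would be illustrative measurements, not the study's data.

```python
# Simple linear regression y = intercept + slope*x with R^2, computed from
# first principles. A sketch of the study's analysis, not its actual code.

def fit_line(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return slope, intercept, 1 - ss_res / ss_tot  # R^2 last
```

For a perfectly linear relationship the sketch returns R^2 = 1; the study's fitted model explained 83% of the variability in the destructive test MOE values.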
2012-01-01
Background The best sites for biopsy-based tests to evaluate H. pylori infection in gastritis with atrophy are not well known. This study aimed to evaluate the site and sensitivity of biopsy-based tests in terms of degree of gastritis with atrophy. Methods One hundred and sixty-four (164) uninvestigated dyspepsia patients were enrolled. Biopsy-based tests (i.e., culture, histology Giemsa stain and rapid urease test) and non-invasive tests (anti-H. pylori IgG) were performed. The gold standard of H. pylori infection was defined according to previous criteria. The sensitivity, specificity, positive predictive rate and negative predictive rate of biopsy-based tests at the gastric antrum and body were calculated in terms of degree of gastritis with atrophy. Results The prevalence rate of H. pylori infection in the 164 patients was 63.4%. Gastritis with atrophy was significantly higher at the antrum than at the body (76% vs. 31%; p<0.001). The sensitivity of biopsy-based tests decreased as the degree of gastritis with atrophy increased, regardless of biopsy site (for normal, mild, moderate, and severe gastritis with atrophy, the sensitivity of histology Giemsa stain was 100%, 100%, 88%, and 66%, respectively, and 100%, 97%, 91%, and 66%, respectively, for rapid urease test). In moderate to severe antrum or body gastritis with atrophy, additional corpus biopsy increased sensitivity by 16.67% compared with single antrum biopsy. Conclusions In moderate to severe gastritis with atrophy, biopsy-based tests should include the corpus to avoid false negative results. PMID:23272897
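The accuracy figures reported above derive from the standard 2x2 contingency quantities, sketched here with hypothetical counts (not the study's data):

```python
# Diagnostic test metrics from true/false positive/negative counts.
# Counts are hypothetical; the gold standard is defined elsewhere.

def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),  # positive predictive rate
        "npv": tn / (tn + fn),  # negative predictive rate
    }
```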
Srinivasulu, Yerukala Sathipati; Wang, Jyun-Rong; Hsu, Kai-Ti; Tsai, Ming-Ju; Charoenkwan, Phasit; Huang, Wen-Lin; Huang, Hui-Ling; Ho, Shinn-Ying
2015-01-01
Protein-protein interactions (PPIs) are involved in various biological processes, and the underlying mechanisms of these interactions play a crucial role in therapeutics and protein engineering. Most machine learning approaches have been developed for predicting the binding affinity of protein-protein complexes based on structural and functional information. This work aims to predict the binding affinity of heterodimeric protein complexes from sequences only. It proposes a support vector machine (SVM) based binding affinity classifier, called SVM-BAC, to classify heterodimeric protein complexes based on the prediction of their binding affinity. SVM-BAC identified 14 of 580 sequence descriptors (physicochemical, energetic and conformational properties of the 20 amino acids) to classify 216 heterodimeric protein complexes into low and high binding affinity. SVM-BAC yielded training accuracy, sensitivity, specificity, AUC and test accuracy of 85.80%, 0.89, 0.83, 0.86 and 83.33%, respectively, better than existing machine learning algorithms. The 14 features and support vector regression were further used to estimate the binding affinities (Pkd) of 200 heterodimeric protein complexes. Prediction performance in a Jackknife test was a correlation coefficient of 0.34 and a mean absolute error of 1.4. We further analyzed three informative physicochemical properties according to their contribution to prediction performance. Results reveal that the following properties are effective in predicting the binding affinity of heterodimeric protein complexes: apparent partition energy based on buried molar fractions, relations between chemical structure and biological activity in principal component analysis IV, and normalized frequency of beta turn. The proposed sequence-based prediction method SVM-BAC uses an optimal feature selection method to identify 14 informative features to classify and predict the binding affinity of heterodimeric protein complexes.
The characterization analysis revealed that the average numbers of beta turns and hydrogen bonds at protein-protein interfaces in high binding affinity complexes are greater than those in low binding affinity complexes. PMID:26681483
Definition and Formulation of Scientific Prediction and Its Role in Inquiry-Based Laboratories
ERIC Educational Resources Information Center
Mauldin, Robert F.
2011-01-01
The formulation of a scientific prediction by students in college-level laboratories is proposed. This activity will develop the students' ability to apply abstract concepts via deductive reasoning. For instances in which a hypothesis will be tested by an experiment, students should develop a prediction that states what sort of experimental…
Predictability of gypsy moth defoliation in central hardwoods: a validation study
David E. Fosbroke; Ray R., Jr. Hicks
1993-01-01
A model for predicting gypsy moth defoliation in central hardwood forests based on stand characteristics was evaluated following a 5-year outbreak in Pennsylvania and Maryland. Study area stand characteristics were similar to those of the areas used to develop the model. Comparisons are made between model predictive capability in two physiographic provinces. The tested...
Kollmeier, Birger; Schädler, Marc René; Warzybok, Anna; Meyer, Bernd T; Brand, Thomas
2016-09-07
To characterize the individual patient's hearing impairment as obtained with the matrix sentence recognition test, a simulation Framework for Auditory Discrimination Experiments (FADE) is extended here using the Attenuation and Distortion (A+D) approach by Plomp as a blueprint for setting the individual processing parameters. FADE has been shown to predict the outcome of both speech recognition tests and psychoacoustic experiments based on simulations using an automatic speech recognition system, requiring only a few assumptions. It builds on the closed-set matrix sentence recognition test, which is advantageous for testing individual speech recognition in a way comparable across languages. Individual predictions of speech recognition thresholds in stationary and in fluctuating noise were derived using the audiogram and an estimate of the internal level uncertainty, which were used to model the individual Plomp curves fitted to the data with the Attenuation (A-) and Distortion (D-) parameters of the Plomp approach. The "typical" audiogram shapes from Bisgaard et al, with or without a "typical" level uncertainty, and the individual data were used for individual predictions. As a result, individualizing the level uncertainty was found to be more important than the exact shape of the individual audiogram for accurately modeling the outcome of the German Matrix test in stationary or fluctuating noise for listeners with hearing impairment. The prediction accuracy of the individualized approach also outperforms the (modified) Speech Intelligibility Index approach, which is based on the individual threshold data only. © The Author(s) 2016.
Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun
2017-11-01
In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., Ogata-Banks solution) is found to be most representative for the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations with the reference estimation by the Ogata-Banks solution, where a part of earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimations of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems. Copyright © 2017 Elsevier B.V. All rights reserved.
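For reference, the Ogata-Banks solution adopted above as the data model (1-D advection-dispersion with a constant inlet concentration C0) can be sketched as follows; any parameter values used with this sketch are arbitrary, not fitted EIT-site values.

```python
# Ogata-Banks (1961) analytical solution for steady 1-D flow:
# C/C0 = 0.5*[erfc((x - v*t)/(2*sqrt(D*t))) + exp(v*x/D)*erfc((x + v*t)/(2*sqrt(D*t)))]
import math

def ogata_banks(x, t, v, D, c0=1.0):
    """Concentration at distance x and time t; v = velocity, D = dispersion."""
    a = (x - v * t) / (2.0 * math.sqrt(D * t))
    b = (x + v * t) / (2.0 * math.sqrt(D * t))
    return 0.5 * c0 * (math.erfc(a) + math.exp(v * x / D) * math.erfc(b))
```

At the inlet (x = 0) the solution returns the boundary concentration c0, and it decays toward zero far downstream of the advancing front.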
Pull out strength calculator for pedicle screws using a surrogate ensemble approach.
Varghese, Vicky; Ramu, Palaniappan; Krishnan, Venkatesh; Saravana Kumar, Gurunathan
2016-12-01
Pedicle screw instrumentation is widely used in the treatment of spinal disorders and deformities. Currently, the surgeon judges the holding power of the instrumentation based on perioperative feel, which is subjective in nature. The objective of this paper is to develop a surrogate model that predicts the pullout strength of a pedicle screw based on density, insertion angle, insertion depth and reinsertion. A Taguchi orthogonal array was used to design an experiment to find the factors affecting the pullout strength of a pedicle screw. The pullout studies were carried out using polyaxial pedicle screws on rigid polyurethane foam block according to the American Society for Testing and Materials standard (ASTM F543). Analysis of variance (ANOVA) and Tukey's honestly significant difference multiple comparison tests were done to find factor effects. Based on the experimental results, surrogate models based on Kriging, polynomial response surface and radial basis functions were developed for predicting the pullout strength for different combinations of factors. An ensemble of these surrogates based on a weighted average surrogate model was also evaluated for prediction. Density, insertion depth, insertion angle and reinsertion have a significant effect (p < 0.05) on the pullout strength of a pedicle screw. The weighted average surrogate performed the best in predicting the pullout strength amongst the surrogate models considered in this study and acted as insurance against bad prediction. A predictive model for the pullout strength of a pedicle screw was developed using experimental values and surrogate models. This can be used in pre-surgical planning and a decision support system for the spine surgeon. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
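One common way to form such a weighted-average surrogate ensemble, sketched below, weights each surrogate inversely to its cross-validation error; this weighting scheme is an assumption for illustration and may differ from the paper's exact formulation.

```python
# Weighted-average surrogate ensemble: combine per-surrogate predictions
# (e.g., Kriging, response surface, radial basis function) with weights
# inversely proportional to each model's cross-validation error.
# An assumed weighting scheme, not necessarily the paper's.

def ensemble_predict(predictions, cv_errors):
    """predictions: per-surrogate predicted pullout strength; cv_errors > 0."""
    weights = [1.0 / e for e in cv_errors]
    return sum(w * p for w, p in zip(weights, predictions)) / sum(weights)
```

With equal errors the ensemble reduces to the plain mean, while a surrogate with a much smaller error dominates the prediction, which is how the ensemble acts as insurance against one bad model.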
RRCRank: a fusion method using rank strategy for residue-residue contact prediction.
Jing, Xiaoyang; Dong, Qiwen; Lu, Ruqian
2017-09-02
In structural biology, protein residue-residue contacts play a crucial role in protein structure prediction. Researchers have found that predicted residue-residue contacts can effectively constrain the conformational search space, which is significant for de novo protein structure prediction. Over the last few decades, various methods have been developed to predict residue-residue contacts; in particular, significant performance has been achieved by fusion methods in recent years. In this work, a novel fusion method based on a rank strategy is proposed to predict contacts. Unlike traditional regression or classification strategies, the contact prediction task is regarded as a ranking task: two kinds of features are extracted from correlated-mutations methods and ensemble machine-learning classifiers, and the proposed method then uses a learning-to-rank algorithm to predict the contact probability of each residue pair. We first perform two benchmark tests for the proposed fusion method (RRCRank) on the CASP11 and CASP12 datasets. The test results show that RRCRank outperforms other well-developed methods, especially for medium- and short-range contacts. Second, to verify the superiority of the ranking strategy, we predict contacts using the traditional regression and classification strategies based on the same features as the ranking strategy. Compared with these two traditional strategies, the proposed ranking strategy shows better performance for all three contact types, in particular for long-range contacts. Third, RRCRank has been compared with several state-of-the-art methods in CASP11 and CASP12. The results show that RRCRank achieves comparable prediction precisions and is better than three methods in most assessment metrics.
The learning-to-rank algorithm is introduced to develop a novel rank-based method for the residue-residue contact prediction of proteins, which achieves state-of-the-art performance based on the extensive assessment.
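Contact predictions framed as a ranking task are typically scored by precision on the top-ranked pairs (for example, the top L/5 pairs for a protein of length L, a common CASP-style metric); a minimal sketch of that metric, with invented scores and labels:

```python
# Top-k precision for ranked contact predictions: sort residue pairs by
# predicted contact score, keep the k highest, and compute the fraction
# that are true contacts. Scores/labels here are illustrative only.

def top_k_precision(scores, is_contact, k):
    ranked = sorted(zip(scores, is_contact), key=lambda pair: -pair[0])
    return sum(c for _, c in ranked[:k]) / k
```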
Compilation of reinforced carbon-carbon transatlantic abort landing arc jet test results
NASA Technical Reports Server (NTRS)
Milhoan, James D.; Pham, Vuong T.; Yuen, Eric H.
1993-01-01
This document consists of the entire test database generated to support the Reinforced Carbon-Carbon Transatlantic Abort Landing Study. RCC components used for orbiter nose cap and wing leading edge thermal protection were originally designed to have a multi-mission entry capability of 2800 F. Increased orbiter range capability required a predicted temperature capability in excess of 3300 F. Three test series were conducted. Test series #1 used ENKA-based RCC specimens coated with silicon carbide, treated with tetraethyl orthosilicate (TEOS), sealed with Type A surface enhancement, and tested at 3000-3400 F with surface pressure of 60-101 psf. Series #2 used ENKA- or AVTEX-based RCC, with and without silicon carbide, Type A or double Type AA surface enhancement, all impregnated with TEOS, at temperatures from 1440-3350 F with pressures from 100-350 psf. Series #3 tested ENKA-based RCC, with and without silicon carbide coating; no specimens were treated with TEOS or sealed with Type A. Surface temperatures ranged from 2690-3440 F and pressures ranged from 313-400 psf. These combined test results provided the database for establishing the RCC material single-mission-limit temperature and developing surface recession correlations used to predict mass loss for abort conditions.
Critical predicted no effect concentrations (PNECs) should not be based on a single toxicity test.
Chapman, Peter M; Elphick, James R
2015-05-01
Predicted no-effect concentrations (PNECs), which represent the concentration of a substance below which an unacceptable effect most likely will not occur, are widely used for risk assessment and in environmental policy and regulation. They are typically based on single-species laboratory toxicity tests; often, a single test result for the most sensitive endpoints drives the derivation of a PNEC. In the present study, the authors provide a case study emphasizing the importance of determining the reliability of those most sensitive endpoints. Five 21-d Daphnia magna toxicity tests conducted using the same procedures by 2 laboratories gave 20% inhibitory concentration responses to a specific ionic composition of total dissolved solids that varied from 684 mg/L to more than 1510 mg/L. The concentration-response curve was shallow; thus, these differences could have been attributable to chance alone. The authors strongly recommend that the most sensitive endpoints that determine PNECs not be based on a single toxicity test result but rather on the geometric mean of at least 3 test results to adequately assess and bound test variability, especially when the concentration-response curve is shallow. © 2015 SETAC.
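The authors' recommendation can be expressed as a formula: replace a single endpoint with the geometric mean of at least 3 test results. A minimal sketch, with hypothetical endpoint values:

```python
# Geometric mean of toxicity endpoints (e.g., IC20s from replicate 21-d
# Daphnia magna tests), as recommended instead of a single test result.
import math

def geometric_mean(values):
    return math.exp(sum(math.log(v) for v in values) / len(values))
```

For example, three IC20s spanning the reported range would be summarized by their geometric mean rather than by the most sensitive single value, damping the between-test variability that a shallow concentration-response curve produces.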
Chen, Guangchao; Li, Xuehua; Chen, Jingwen; Zhang, Ya-Nan; Peijnenburg, Willie J G M
2014-12-01
Biodegradation is the principal environmental dissipation process of chemicals. As such, it is a dominant factor determining the persistence and fate of organic chemicals in the environment, and is therefore of critical importance to chemical management and regulation. In the present study, the authors developed in silico methods for assessing biodegradability based on a large heterogeneous set of 825 organic compounds, using the techniques of the C4.5 decision tree, the functional inner regression tree, and logistic regression. External validation was subsequently carried out with 2 independent test sets of 777 and 27 chemicals. As a result, the functional inner regression tree exhibited the best predictability, with predictive accuracies of 81.5% and 81.0%, respectively, on the training set (825 chemicals) and test set I (777 chemicals). Performance of the developed models on the 2 test sets was subsequently compared with that of the Estimation Program Interface (EPI) Suite Biowin 5 and Biowin 6 models, and this comparison also showed the better predictability of the functional inner regression tree model. The model built in the present study exhibits reasonable predictability compared with existing models while possessing a transparent algorithm. Interpretation of the mechanisms of biodegradation was also carried out based on the models developed. © 2014 SETAC.
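Of the three techniques named above, logistic regression is the simplest to sketch; the toy descriptor values and labels below are invented, and this is not the authors' implementation or descriptor set.

```python
# Minimal logistic-regression classifier trained by stochastic gradient
# descent on the log-loss, labeling chemicals as readily biodegradable (1)
# or not (0) from molecular descriptors. Illustrative only.
import math

def train_logistic(X, y, lr=0.5, epochs=2000):
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))     # predicted probability
            err = p - yi                        # gradient of the log-loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict_label(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1.0 / (1.0 + math.exp(-z)) >= 0.5
```

A transparent model of this kind exposes its fitted coefficients directly, which is the interpretability advantage the abstract contrasts with less transparent alternatives.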
Vorobjev, Yury N; Scheraga, Harold A; Vila, Jorge A
2018-02-01
A computational method, to predict the pKa values of the ionizable residues Asp, Glu, His, Tyr, and Lys of proteins, is presented here. Calculation of the electrostatic free-energy of the proteins is based on an efficient version of a continuum dielectric electrostatic model. The conformational flexibility of the protein is taken into account by carrying out molecular dynamics simulations of 10 ns in implicit water. The accuracy of the proposed method of calculation of pKa values is estimated from a test set of experimental pKa data for 297 ionizable residues from 34 proteins. The pKa-prediction test shows that, on average, 57, 86, and 95% of all predictions have an error lower than 0.5, 1.0, and 1.5 pKa units, respectively. This work contributes to our general understanding of the importance of protein flexibility for an accurate computation of pKa, providing critical insight about the significance of the multiple neutral states of acid and histidine residues for pKa-prediction, and may spur significant progress in our effort to develop a fast and accurate electrostatic-based method for pKa-predictions of proteins as a function of pH.
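The continuum-electrostatics route to pKa rests on a standard thermodynamic relation: the shift from a model-compound pKa equals the electrostatic free-energy change of deprotonation (protein minus model compound) in units of 2.303 RT. A worked sketch, with the sign convention stated as an assumption:

```python
# pKa(protein) = pKa(model) + DeltaDeltaG / (2.303*R*T), where DeltaDeltaG
# is the deprotonation free-energy change in the protein relative to the
# model compound (sign convention assumed: positive shift raises pKa).
import math

R = 8.314462618e-3  # gas constant, kJ/(mol*K)

def pka_shifted(pka_model, ddg_kj_mol, temp_k=298.15):
    return pka_model + ddg_kj_mol / (math.log(10) * R * temp_k)
```

At room temperature 2.303 RT is about 5.7 kJ/mol, so each 5.7 kJ/mol of electrostatic destabilization of the deprotonated form shifts the pKa by roughly one unit.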
NASA Astrophysics Data System (ADS)
Wang, Fuzeng; Zhao, Jun; Zhu, Ningbo
2016-11-01
The flow behavior of Ti-6Al-4V alloy was studied by automated ball indentation (ABI) tests over a wide range of temperatures (293, 493, 693, and 873 K) and strain rates (10^-6, 10^-5, and 10^-4 s^-1). Based on the experimental true stress-plastic strain data derived from the ABI tests, the Johnson-Cook (JC), Khan-Huang-Liang (KHL) and modified Zerilli-Armstrong (ZA) constitutive models, as well as artificial neural network (ANN) methods, were employed to predict the flow behavior of Ti-6Al-4V. A comparative study was made on the reliability of the four models, and their predictability was evaluated in terms of correlation coefficient (R) and mean absolute percentage error. It is found that the flow stresses of Ti-6Al-4V alloy are more sensitive to temperature than strain rate under the current experimental conditions. The predicted flow stresses obtained from the JC model and the KHL model show much better agreement with the experimental results than the modified ZA model. Moreover, the ANN model is much more efficient and shows a higher accuracy in predicting the flow behavior of Ti-6Al-4V alloy than the constitutive equations.
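The Johnson-Cook model referenced above has the standard form sigma = (A + B*eps^n) * (1 + C*ln(eps_rate/eps_rate0)) * (1 - T*^m), with T* the homologous temperature; the constants below are placeholders, not the fitted Ti-6Al-4V values from the paper.

```python
# Johnson-Cook flow stress: strain hardening x strain-rate sensitivity x
# thermal softening. Constants A, B, n, C, m here are placeholders only.
import math

def johnson_cook(eps_p, eps_rate, T, A=900.0, B=600.0, n=0.4, C=0.01,
                 m=0.8, eps_rate0=1e-4, T_ref=293.0, T_melt=1933.0):
    T_star = (T - T_ref) / (T_melt - T_ref)  # homologous temperature
    return (A + B * eps_p ** n) \
        * (1.0 + C * math.log(eps_rate / eps_rate0)) \
        * (1.0 - T_star ** m)
```

At the reference temperature and reference strain rate with zero plastic strain the expression reduces to the yield constant A, and the thermal-softening factor makes the predicted flow stress fall as temperature rises, consistent with the temperature sensitivity reported above.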
Samy, Abdallah M; Annajar, Badereddin B; Dokhan, Mostafa Ramadhan; Boussaa, Samia; Peterson, A Townsend
2016-02-01
Cutaneous leishmaniasis ranks among the tropical diseases least known and most neglected in Libya. World Health Organization reports recognized associations of Phlebotomus papatasi, Psammomys obesus, and Meriones spp., with transmission of zoonotic cutaneous leishmaniasis (ZCL; caused by Leishmania major) across Libya. Here, we map risk of ZCL infection based on occurrence records of L. major, P. papatasi, and four potential animal reservoirs (Meriones libycus, Meriones shawi, Psammomys obesus, and Gerbillus gerbillus). Ecological niche models identified limited risk areas for ZCL across the northern coast of the country; most species associated with ZCL transmission were confined to this same region, but some had ranges extending to central Libya. All ENM predictions were significant based on partial ROC tests. As a further evaluation of L. major ENM predictions, we compared predictions with 98 additional independent records provided by the Libyan National Centre for Disease Control (NCDC); all of these records fell inside the belt predicted as suitable for ZCL. We tested ecological niche similarity among vector, parasite, and reservoir species and could not reject any null hypotheses of niche similarity. Finally, we tested among possible combinations of vector and reservoir that could predict all recent human ZCL cases reported by NCDC; only three combinations could anticipate the distribution of human cases across the country. PMID:26863317
Long Duration Exposure Facility (LDEF) structural verification test report
NASA Technical Reports Server (NTRS)
Jones, T. C.; Lucy, M. H.; Shearer, R. L.
1983-01-01
Structural load tests on the Long Duration Exposure Facility's (LDEF) primary structure were conducted. These tests had three purposes: (1) demonstrate structural adequacy of the assembled LDEF primary structure when subjected to anticipated flight loads; (2) verify analytical models and methods used in loads and stress analysis; and (3) perform tests to comply with the Space Transportation System (STS) requirements. Test loads were based on predicted limit loads which consider all flight events. Good agreement is shown between predicted and observed load, strain, and deflection data. Test data show that the LDEF structure was subjected to 1.2 times limit load to meet the STS requirements. The structural adequacy of the LDEF is demonstrated.
NASA Technical Reports Server (NTRS)
Park, Sang C.; Carnahan, Timothy M.; Cohen, Lester M.; Congedo, Cherie B.; Eisenhower, Michael J.; Ousley, Wes; Weaver, Andrew; Yang, Kan
2017-01-01
The JWST Optical Telescope Element (OTE) assembly is the largest optically stable infrared-optimized telescope currently being manufactured and assembled, and is scheduled for launch in 2018. The JWST OTE, including the 18-segment primary mirror, the secondary mirror, and the Aft Optics Subsystem (AOS), is designed to be passively cooled and to operate near 45 K. These optical elements are supported by a complex composite backplane structure. As part of the structural distortion model validation efforts, a series of tests is planned during the cryogenic vacuum test of the fully integrated flight hardware at NASA JSC Chamber A. The success of the thermal-distortion test phases depends heavily on accurate temperature knowledge of the OTE structural members. However, the current temperature sensor allocations during the cryo-vac test may not have sufficient fidelity to provide accurate knowledge of the temperature distributions within the composite structure. A method based on an inverse distance relationship among the sensors and thermal model nodes was developed to improve the thermal data provided for the nanometer-scale WaveFront Error (WFE) predictions. The Linear Distance Weighted Interpolation (LDWI) method was developed to augment the thermal model predictions based on the sparse sensor information. This paper covers the development of the LDWI method using test data from the earlier pathfinder cryo-vac tests, along with the notional and as-tested WFE predictions from the structural finite element model cases used to characterize the accuracy of the LDWI method.
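The abstract does not spell out the LDWI formula; a generic inverse-distance weighting of sparse sensor readings (Shepard-style, with the function name and the distance exponent as illustrative assumptions) sketches the idea of estimating temperature at an unsensed model node:

```python
def ldwi_estimate(node_pos, sensor_pos, sensor_temps, power=1.0):
    """Estimate the temperature at an unsensed model node from sparse
    sensor readings, weighting each sensor inversely to its distance."""
    num = den = 0.0
    for pos, temp in zip(sensor_pos, sensor_temps):
        d = sum((a - b) ** 2 for a, b in zip(node_pos, pos)) ** 0.5
        if d == 0.0:
            return temp  # node coincides with a sensor
        w = 1.0 / d ** power
        num += w * temp
        den += w
    return num / den

# Illustrative: a node midway between sensors reading 40 K and 50 K
t_mid = ldwi_estimate((0.5, 0.0), [(0.0, 0.0), (1.0, 0.0)], [40.0, 50.0])
```

With equal distances the estimate is the plain average (45 K here); closer sensors dominate as the node moves toward them.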
A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM
Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei
2018-01-01
Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and support vector machines (SVMs) are often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to escape local optima. To verify the performance of NAPSO-SVM, three algorithms are used to optimize the SVM's parameters: particle swarm optimization (PSO), the improved PSO algorithm (NAPSO), and glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are used as test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performance. The experimental results show that, among the three tested algorithms, the NAPSO-SVM method achieves the best prediction precision and the smallest prediction errors, and is an effective method for predicting the dynamic measurement errors of sensors. PMID:29342942
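The two evaluation metrics named above have standard definitions, which can be sketched as follows (the data values are illustrative, not the paper's):

```python
import math

def rmse(actual, predicted):
    """Root mean squared error."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

def mape(actual, predicted):
    """Mean absolute percentage error, in percent (actual values nonzero)."""
    return (100.0 / len(actual)) * sum(abs((a - p) / a)
                                       for a, p in zip(actual, predicted))

measured = [1.0, 2.0, 4.0]   # illustrative measured error sequence
forecast = [1.1, 1.9, 4.4]   # a model's predictions of those errors
# rmse(measured, forecast) ≈ 0.245; mape(measured, forecast) ≈ 8.33
```

Lower values of both metrics indicate a better prediction model, which is how the three optimizers are ranked in the study.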
NASA Technical Reports Server (NTRS)
Rule, William Keith
1991-01-01
A computer program called BALLIST that is intended to be a design tool for engineers is described. BALLIST empirically predicts the bumper thickness required to prevent perforation of the Space Station pressure wall by a projectile (such as orbital debris) as a function of the projectile's velocity. 'Ballistic' limit curves (bumper thickness vs. projectile velocity) are calculated and are displayed on the screen as well as being stored in an ASCII file. A Whipple style of spacecraft wall configuration is assumed. The predictions are based on a database of impact test results. NASA/Marshall Space Flight Center currently has the capability to generate such test results. Numerical simulation results of impact conditions that cannot be tested (high velocities or large particles) can also be used for predictions.
Empirical Observations on the Sensitivity of Hot Cathode Ionization Type Vacuum Gages
NASA Technical Reports Server (NTRS)
Summers, R. L.
1969-01-01
A study of empirical methods of predicting the relative sensitivities of hot cathode ionization gages is presented. Using previously published gage sensitivities, several rules for predicting relative sensitivity are tested. The relative sensitivity to different gases is shown to be invariant with gage type, in the linear range of gage operation. The total ionization cross section, molecular and molar polarizability, and refractive index are demonstrated to be useful parameters for predicting relative gage sensitivity. Using data from the literature, the probable error of predictions of relative gage sensitivity based on these molecular properties is found to be about 10 percent. A comprehensive table of predicted relative sensitivities, based on empirical methods, is presented.
Crundall, David; Kroll, Victoria
2018-05-18
Can hazard perception testing be useful for the emergency services? Previous research has found emergency response drivers (ERDs) to perform better than controls; however, these studies used clips of normal driving. In contrast, the current study filmed footage from a fire appliance on blue-light training runs through Nottinghamshire and endeavoured to discriminate between different groups of ERDs based on experience and collision risk. Thirty clips were selected to create two variants of the hazard perception test: a traditional push-button test requiring speeded responses to hazards, and a prediction test that occludes at hazard onset and provides four possible outcomes for participants to choose between. Three groups of fire appliance drivers (novices, low-risk experienced and high-risk experienced), and age-matched controls, undertook both tests. The hazard perception test discriminated only between controls and fire appliance drivers as a whole, whereas the hazard prediction test was more sensitive, discriminating between high- and low-risk experienced fire appliance drivers. Eye movement analyses suggest that the low-risk drivers were better at prioritising the hazardous precursors, leading to better predictive accuracy. These results pave the way for future assessment and training tools to supplement emergency response driver training, while supporting the growing literature that identifies hazard prediction as a more robust measure of driver safety than traditional hazard perception tests. Copyright © 2018 Elsevier Ltd. All rights reserved.
Single-Column Modeling, GCM Parameterizations and Atmospheric Radiation Measurement Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Somerville, R.C.J.; Iacobellis, S.F.
2005-03-18
Our overall goal is identical to that of the Atmospheric Radiation Measurement (ARM) Program: the development of new and improved parameterizations of cloud-radiation effects and related processes, using ARM data at all three ARM sites, and the implementation and testing of these parameterizations in global and regional models. To test recently developed prognostic parameterizations based on detailed cloud microphysics, we have first compared single-column model (SCM) output with ARM observations at the Southern Great Plains (SGP), North Slope of Alaska (NSA) and Tropical Western Pacific (TWP) sites. We focus on the predicted cloud amounts and on a suite of radiative quantities strongly dependent on clouds, such as downwelling surface shortwave radiation. Our results demonstrate the superiority of parameterizations based on comprehensive treatments of cloud microphysics and cloud-radiative interactions. At the SGP and NSA sites, the SCM results simulate the ARM measurements well and are demonstrably more realistic than typical parameterizations found in conventional operational forecasting models. At the TWP site, the model performance depends strongly on details of the scheme, and the results of our diagnostic tests suggest ways to develop improved parameterizations better suited to simulating cloud-radiation interactions in the tropics generally. These advances have made it possible to take the next step and build on this progress, by incorporating our parameterization schemes in state-of-the-art 3D atmospheric models, and diagnosing and evaluating the results using independent data. Because the improved cloud-radiation results have been obtained largely via implementing detailed and physically comprehensive cloud microphysics, we anticipate that improved predictions of hydrologic cycle components, and hence of precipitation, may also be achievable.
We are currently testing the performance of our ARM-based parameterizations in state-of-the-art global and regional models. One fruitful strategy for evaluating advances in parameterizations has turned out to be using short-range numerical weather prediction as a test-bed within which to implement and improve parameterizations for modeling and predicting climate variability. The global models we have used to date are the CAM atmospheric component of the National Center for Atmospheric Research (NCAR) CCSM climate model as well as the National Centers for Environmental Prediction (NCEP) numerical weather prediction model, thus allowing testing in both climate simulation and numerical weather prediction modes. We present detailed results of these tests, demonstrating the sensitivity of model performance to changes in parameterizations.
Papini, Gabriele; Bonomi, Alberto G; Stut, Wim; Kraal, Jos J; Kemps, Hareld M C; Sartor, Francesco
2017-01-01
Cardiorespiratory fitness (CRF) provides important diagnostic and prognostic information. It is measured directly via laboratory maximal testing or indirectly via submaximal protocols making use of predictor parameters such as submaximal [Formula: see text], heart rate, workload, and perceived exertion. We have established an innovative methodology, which can provide CRF prediction based only on body motion during a periodic movement. Thirty healthy subjects (40% females, 31.3 ± 7.8 yrs, 25.1 ± 3.2 BMI) and eighteen male coronary artery disease (CAD) (56.6 ± 7.4 yrs, 28.7 ± 4.0 BMI) patients performed a [Formula: see text] test on a cycle ergometer as well as a 45 second squatting protocol at a fixed tempo (80 bpm). A tri-axial accelerometer was used to monitor movements during the squat exercise test. Three regression models were developed to predict CRF based on subject characteristics and a new accelerometer-derived feature describing motion decay. For each model, the Pearson correlation coefficient and the root mean squared error percentage were calculated using the leave-one-subject-out cross-validation method (rcv, RMSEcv). The model built with all healthy individuals' data showed an rcv = 0.68 and an RMSEcv = 16.7%. The CRF prediction improved when only healthy individuals with normal to lower fitness (CRF<40 ml/min/kg) were included, showing an rcv = 0.91 and RMSEcv = 8.7%. Finally, our accelerometry-based CRF prediction in CAD patients, the majority of whom were taking β-blockers, still showed high accuracy (rcv = 0.91; RMSEcv = 9.6%). In conclusion, motion decay and subject characteristics could be used to accurately predict CRF in healthy people as well as in CAD patients taking β-blockers. This method could represent a valid alternative for patients taking β-blockers, but needs to be further validated in a larger population.
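The evaluation scheme (leave-one-subject-out cross-validation, scored by Pearson rcv and RMSEcv) can be sketched with a single hypothetical predictor standing in for the motion-decay feature; the data values below are illustrative, not the study's:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (one predictor)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def loso_predictions(xs, ys):
    """Leave-one-subject-out: refit with each subject held out, then
    predict that subject's value from the model that never saw it."""
    preds = []
    for i in range(len(xs)):
        a, b = fit_line(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        preds.append(a * xs[i] + b)
    return preds

def pearson(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical motion-decay feature vs. measured VO2max (ml/min/kg)
decay = [0.2, 0.4, 0.5, 0.7, 0.9]
vo2 = [22.0, 28.0, 31.0, 38.0, 44.0]
preds = loso_predictions(decay, vo2)
rcv = pearson(vo2, preds)
rmse_pct = (((sum((a - p) ** 2 for a, p in zip(vo2, preds)) / len(vo2)) ** 0.5)
            / (sum(vo2) / len(vo2)) * 100.0)
```

Because each prediction comes from a model fitted without that subject, rcv and RMSEcv estimate out-of-sample performance rather than fit quality.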
Predicting Treatment Response in Social Anxiety Disorder From Functional Magnetic Resonance Imaging
Doehrmann, Oliver; Ghosh, Satrajit S.; Polli, Frida E.; Reynolds, Gretchen O.; Horn, Franziska; Keshavan, Anisha; Triantafyllou, Christina; Saygin, Zeynep M.; Whitfield-Gabrieli, Susan; Hofmann, Stefan G.; Pollack, Mark; Gabrieli, John D.
2013-01-01
Context Current behavioral measures poorly predict treatment outcome in social anxiety disorder (SAD). To our knowledge, this is the first study to examine neuroimaging-based treatment prediction in SAD. Objective To measure brain activation in patients with SAD as a biomarker to predict subsequent response to cognitive behavioral therapy (CBT). Design Functional magnetic resonance imaging (fMRI) data were collected prior to CBT intervention. Changes in clinical status were regressed on brain responses and tested for selectivity for social stimuli. Setting Patients were treated with protocol-based CBT at anxiety disorder programs at Boston University or Massachusetts General Hospital and underwent neuroimaging data collection at Massachusetts Institute of Technology. Patients Thirty-nine medication-free patients meeting DSM-IV criteria for the generalized subtype of SAD. Interventions Brain responses to angry vs neutral faces or emotional vs neutral scenes were examined with fMRI prior to initiation of CBT. Main Outcome Measures Whole-brain regression analyses with differential fMRI responses for angry vs neutral faces and changes in Liebowitz Social Anxiety Scale score as the treatment outcome measure. Results Pretreatment responses significantly predicted subsequent treatment outcome of patients selectively for social stimuli and particularly in regions of higher-order visual cortex. Combining the brain measures with information on clinical severity accounted for more than 40% of the variance in treatment response and substantially exceeded predictions based on clinical measures at baseline. Prediction success was unaffected by testing for potential confounding factors such as depression severity at baseline. 
Conclusions The results suggest that brain imaging can provide biomarkers that substantially improve predictions for the success of cognitive behavioral interventions and more generally suggest that such biomarkers may offer evidence-based, personalized medicine approaches for optimally selecting among treatment options for a patient. PMID:22945462
Affective Dynamics of Leadership: An Experimental Test of Affect Control Theory
ERIC Educational Resources Information Center
Schroder, Tobias; Scholl, Wolfgang
2009-01-01
Affect Control Theory (ACT; Heise 1979, 2007) states that people control social interactions by striving to maintain culturally shared feelings about the situation. The theory is based on mathematical models of language-based impression formation. In a laboratory experiment, we tested the predictive power of a new German-language ACT model with…
A Test of Two Alternative Cognitive Processing Models: Learning Styles and Dual Coding
ERIC Educational Resources Information Center
Cuevas, Joshua; Dawson, Bryan L.
2018-01-01
This study tested two cognitive models, learning styles and dual coding, which make contradictory predictions about how learners process and retain visual and auditory information. Learning styles-based instructional practices are common in educational environments despite a questionable research base, while the use of dual coding is less…
Petersen, Japke F; Stuiver, Martijn M; Timmermans, Adriana J; Chen, Amy; Zhang, Hongzhen; O'Neill, James P; Deady, Sandra; Vander Poorten, Vincent; Meulemans, Jeroen; Wennerberg, Johan; Skroder, Carl; Day, Andrew T; Koch, Wayne; van den Brekel, Michiel W M
2018-05-01
TNM classification inadequately estimates patient-specific overall survival (OS). We aimed to improve this by developing a risk-prediction model for patients with advanced larynx cancer. Cohort study. We developed a risk prediction model to estimate the 5-year OS rate based on a cohort of 3,442 patients with T3T4N0N+M0 larynx cancer. The model was internally validated using bootstrapping samples and externally validated on patient data from five external centers (n = 770). The main outcome was performance of the model as tested by discrimination, calibration, and the ability to distinguish risk groups based on tertiles from the derivation dataset. The model performance was compared to a model based on T and N classification only. We included age, gender, T and N classification, and subsite as prognostic variables in the standard model. After external validation, the standard model had a significantly better fit than a model based on T and N classification alone (C statistic, 0.59 vs. 0.55, P < .001). The model was able to distinguish well among three risk groups based on tertiles of the risk score. Adding treatment modality to the model did not decrease the predictive power. As a post hoc analysis, we tested the added value of comorbidity as scored by the American Society of Anesthesiologists score in a subsample, which increased the C statistic to 0.68. A risk prediction model for patients with advanced larynx cancer, consisting of readily available clinical variables, gives more accurate estimations of the 5-year survival rate when compared to a model based on T and N classification alone. 2c. Laryngoscope, 128:1140-1145, 2018. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.
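The C statistic used above to compare models has a simple interpretation for a binary outcome: the probability that a randomly chosen patient with the event receives a higher risk score than one without. A minimal sketch (the scores and outcomes are illustrative):

```python
def c_statistic(scores, outcomes):
    """Concordance (C) statistic for a binary outcome: the fraction of
    (event, non-event) pairs where the event case has the higher risk
    score, counting ties as half. 0.5 = no discrimination, 1.0 = perfect."""
    pairs = concordant = tied = 0
    for si, oi in zip(scores, outcomes):
        for sj, oj in zip(scores, outcomes):
            if oi == 1 and oj == 0:
                pairs += 1
                if si > sj:
                    concordant += 1
                elif si == sj:
                    tied += 1
    return (concordant + 0.5 * tied) / pairs

# Illustrative: higher predicted risks mostly assigned to events
c = c_statistic([0.9, 0.8, 0.6, 0.4, 0.3], [1, 1, 0, 1, 0])
```

Values such as the study's 0.59 vs. 0.55 are modest on this scale, which is why adding comorbidity (raising it to 0.68) matters.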
Predicting one repetition maximum equations accuracy in paralympic rowers with motor disabilities.
Schwingel, Paulo A; Porto, Yuri C; Dias, Marcelo C M; Moreira, Mônica M; Zoppi, Cláudio C
2009-05-01
Resistance training intensity is prescribed using percentiles of the maximum strength, defined as the maximum tension generated by a muscle or muscular group. This value is found through the application of the one maximal repetition (1RM) test. The 1RM test demands time and is not appropriate for some populations because of the risk it poses. In recent years, the prediction of maximal strength through predictive equations has been used to avoid the inconveniences of the 1RM test. The purpose of this study was to verify the accuracy of 12 1RM-predicting equations for disabled rowers. Nine male paralympic rowers (7 one-leg amputated rowers and 2 cerebral paralyzed rowers; age, 30 +/- 7.9 years; height, 175.1 +/- 5.9 cm; weight, 69 +/- 13.6 kg) performed the 1RM test for the lying T-bar row and flat barbell bench press exercises to determine upper-body strength and the leg press exercise to determine lower-body strength. The 1RM test was performed, and based on submaximal repetition loads, several linear and exponential equation models were tested with regard to their accuracy. We did not find statistical differences for the lying T-bar row and bench press exercises between measured and predicted 1RM values (p = 0.84 and 0.23 for lying T-bar row and flat barbell bench press, respectively); however, the leg press exercise showed a highly significant difference between measured and predicted values (p < 0.01). In conclusion, rowers with motor disabilities tolerate 1RM testing procedures, and 1RM-predicting equations are accurate for the bench press and lying T-bar row, but not for the leg press, in this kind of athlete.
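The abstract does not list the twelve equations tested; as context, two widely published examples of such linear prediction equations, Epley and Brzycki, estimate 1RM from a submaximal load and the number of repetitions completed:

```python
def epley_1rm(weight, reps):
    """Epley equation: 1RM = w * (1 + reps / 30)."""
    return weight * (1.0 + reps / 30.0)

def brzycki_1rm(weight, reps):
    """Brzycki equation: 1RM = w * 36 / (37 - reps)."""
    return weight * 36.0 / (37.0 - reps)

# Illustrative: 8 submaximal reps at 80 kg on the bench press
est_epley = epley_1rm(80.0, 8)      # ≈ 101.3 kg
est_brzycki = brzycki_1rm(80.0, 8)  # ≈ 99.3 kg
```

Whether the study used these particular forms is not stated here; they simply illustrate how a submaximal set substitutes for a maximal attempt.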
Cassini RTG acceptance test results and RTG performance on Galileo and Ulysses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelly, C.E.; Klee, P.M.
Flight acceptance testing has been completed for the RTGs to be used on the Cassini spacecraft, which is scheduled for an October 6, 1997 launch to Saturn. The acceptance test program includes vibration tests, magnetic field measurements, mass properties (weight and c.g.), and thermal vacuum tests. This paper presents the thermal vacuum test results. Three RTGs are to be used: F-2, F-6, and F-7. F-5 is the backup RTG, as it was for the Galileo and Ulysses missions launched in 1989 and 1990, respectively. RTG performance measured during the thermal vacuum tests carried out at the Mound Laboratory facility met all specification requirements. Beginning of mission (BOM) and end of mission (EOM) power predictions have been made based on these test results. BOM power is predicted to be 888 watts compared to the minimum requirement of 826 watts. Degradation models predict the EOM power after 16 years to be 640 watts compared to a minimum requirement of 596 watts. Results of small scale module tests are also shown. The modules contain couples from the qualification and flight production runs. The tests have exceeded 28,000 hours (3.2 years) and are continuing to provide increased confidence in the predicted long term performance of the Cassini RTGs. All test results indicate that the power requirements of the Cassini spacecraft will be met. BOM and EOM power margins of over 5% are predicted. Power output from telemetry for the two Galileo RTGs is shown from the 1989 launch to the recent Jupiter encounter. Comparisons of predicted, measured and required performance are shown. Telemetry data are also shown for the RTG on the Ulysses spacecraft, which completed its planned mission in 1995 and is now in the extended mission.
Cassini RTG Acceptance Test Results and RTG Performance on Galileo and Ulysses
DOE R&D Accomplishments Database
Kelly, C. E.; Klee, P. M.
1997-06-01
Flight acceptance testing has been completed for the RTGs to be used on the Cassini spacecraft, which is scheduled for an October 6, 1997 launch to Saturn. The acceptance test program includes vibration tests, magnetic field measurements, mass properties (weight and c.g.), and thermal vacuum tests. This paper presents the thermal vacuum test results. Three RTGs are to be used: F-2, F-6, and F-7. F-5 is the backup RTG, as it was for the Galileo and Ulysses missions launched in 1989 and 1990, respectively. RTG performance measured during the thermal vacuum tests carried out at the Mound Laboratory facility met all specification requirements. Beginning of mission (BOM) and end of mission (EOM) power predictions have been made based on these test results. BOM power is predicted to be 888 watts compared to the minimum requirement of 826 watts. Degradation models predict the EOM power after 16 years to be 640 watts compared to a minimum requirement of 596 watts. Results of small scale module tests are also shown. The modules contain couples from the qualification and flight production runs. The tests have exceeded 28,000 hours (3.2 years) and are continuing to provide increased confidence in the predicted long term performance of the Cassini RTGs. All test results indicate that the power requirements of the Cassini spacecraft will be met. BOM and EOM power margins of over five percent are predicted. Power output from telemetry for the two Galileo RTGs is shown from the 1989 launch to the recent Jupiter encounter. Comparisons of predicted, measured and required performance are shown. Telemetry data are also shown for the RTG on the Ulysses spacecraft, which completed its planned mission in 1995 and is now in the extended mission.
Salvaggio, C N; Forman, E J; Garnsey, H M; Treff, N R; Scott, R T
2014-09-01
Polar body biopsy represents one possible solution to performing comprehensive chromosome screening (CCS). This study adds to what is known about the predictive value of polar body-based testing for the genetic status of the resulting embryo, but more importantly, provides the first evaluation of the predictive value for actual clinical outcomes after embryo transfer. SNP array analysis was performed on the first polar body, the second polar body, and either a blastomere or trophectoderm biopsy, or the entire arrested embryo. Concordance of the polar body-based prediction with the observed diagnoses in the embryos was assessed. In addition, the predictive value of the polar body-based diagnosis for the specific clinical outcome of transferred embryos was evaluated through the use of DNA fingerprinting to track individual embryos. There were 459 embryos analyzed from 96 patients with a mean maternal age of 35.3 years. The polar body-based predictive value for the embryo-based diagnosis was 70.3%. The blastocyst implantation predictive value of a euploid trophectoderm was higher than that of euploid polar bodies (51% versus 40%). The cleavage-stage embryo implantation predictive value of a euploid blastomere was also higher than that of euploid polar bodies (31% versus 22%). Polar body-based aneuploidy screening results were less predictive of actual clinical outcomes than direct embryo assessment and may not be adequate to improve sustained implantation rates. In nearly one-third of cases the polar body-based analysis failed to predict the ploidy of the embryo. This imprecision may hinder efforts for polar body-based CCS to improve IVF clinical outcomes.
Hernando, Barbara; Ibañez, Maria Victoria; Deserio-Cuesta, Julio Alberto; Soria-Navarro, Raquel; Vilar-Sastre, Inca; Martinez-Cadenas, Conrado
2018-03-01
Prediction of human pigmentation traits, one of the most differentiable externally visible characteristics among individuals, from biological samples represents a useful tool in the field of forensic DNA phenotyping. In spite of freckling being a relatively common pigmentation characteristic in Europeans, little is known about the genetic basis of this largely genetically determined phenotype in southern European populations. In this work, we explored the predictive capacity of eight freckle and sunlight sensitivity-related genes in 458 individuals (266 non-freckled controls and 192 freckled cases) from Spain. Four loci were associated with freckling (MC1R, IRF4, ASIP and BNC2), and female sex was also found to be a predictive factor for having a freckling phenotype in our population. After identifying the most informative genetic variants responsible for human ephelides occurrence in our sample set, we developed a DNA-based freckle prediction model using a multivariate regression approach. Once developed, the capabilities of the prediction model were tested by a repeated 10-fold cross-validation approach. The proportion of correctly predicted individuals using the DNA-based freckle prediction model was 74.13%. The implementation of sex into the DNA-based freckle prediction model slightly improved the overall prediction accuracy by 2.19% (76.32%). Further evaluation of the newly-generated prediction model was performed by assessing the model's performance in a new cohort of 212 Spanish individuals, reaching a classification success rate of 74.61%. Validation of this prediction model may be carried out in larger populations, including samples from different European populations. Further research to validate and improve this newly-generated freckle prediction model will be needed before its forensic application. 
Together with DNA tests already validated for eye and hair colour prediction, this freckle prediction model may lead to a substantially more detailed physical description of unknown individuals from DNA found at the crime scene. Copyright © 2017 Elsevier B.V. All rights reserved.
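The study's modeling pipeline (a multivariate regression classifier evaluated by repeated 10-fold cross-validation) can be outlined in miniature. The predictors, data, fold counts, and hyperparameters below are illustrative stand-ins, not the study's variables:

```python
import math, random

def fit_logistic(X, y, lr=0.5, iters=800):
    """Gradient-descent logistic regression; last weight is the intercept."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(iters):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            row = xi + [1.0]
            p = 1.0 / (1.0 + math.exp(-sum(wj * xj for wj, xj in zip(w, row))))
            for j, xj in enumerate(row):
                grad[j] += (p - yi) * xj
        w = [wj - lr * g / len(X) for wj, g in zip(w, grad)]
    return w

def predict(w, xi):
    """Classify as 1 when the predicted probability is at least 0.5."""
    return 1 if sum(wj * xj for wj, xj in zip(w, xi + [1.0])) >= 0.0 else 0

def repeated_kfold_accuracy(X, y, k=5, repeats=3, seed=1):
    """Repeated k-fold cross-validation of classification accuracy."""
    rng = random.Random(seed)
    idx = list(range(len(X)))
    accs = []
    for _ in range(repeats):
        rng.shuffle(idx)
        folds = [idx[i::k] for i in range(k)]
        for fold in folds:
            train = [i for i in idx if i not in fold]
            w = fit_logistic([X[i] for i in train], [y[i] for i in train])
            accs.append(sum(predict(w, X[i]) == y[i] for i in fold) / len(fold))
    return sum(accs) / len(accs)

# Hypothetical binary predictors: [risk allele present, female sex]
X = [[1, 1], [1, 0], [1, 1], [0, 1], [0, 0],
     [0, 0], [1, 0], [0, 1], [1, 1], [0, 0]]
y = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]  # hypothetical freckling outcome
acc = repeated_kfold_accuracy(X, y)
```

Averaging accuracy over repeated random fold assignments, as the study does with 10 folds, reduces the variance of the estimate relative to a single split.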
How to test for partially predictable chaos.
Wernecke, Hendrik; Sándor, Bulcsú; Gros, Claudius
2017-04-24
For a chaotic system, pairs of initially close-by trajectories eventually become fully uncorrelated on the attracting set. This process of decorrelation can split into an initial exponential decrease and a subsequent diffusive process on the chaotic attractor causing the final loss of predictability. The two processes can occur either on the same or on very different time scales. In the latter case the two trajectories linger within a finite but small distance (with respect to the overall extent of the attractor) for exceedingly long times and remain partially predictable. Standard tests for chaos widely use inter-orbital correlations as an indicator. However, for partially predictable chaos such tests yield mostly ambiguous results, as this type of chaos is characterized by attractors of fractally broadened braids. As a resolution we introduce a novel 0-1 indicator for chaos based on the cross-distance scaling of pairs of initially close trajectories. This test robustly discriminates chaos, including partially predictable chaos, from laminar flow. Additionally, using the finite-time cross-correlation of pairs of initially close trajectories, we are able to identify laminar flow as well as strong and partially predictable chaos in a 0-1 manner solely from the properties of pairs of trajectories.
Characterizing Decision-Analysis Performances of Risk Prediction Models Using ADAPT Curves.
Lee, Wen-Chung; Wu, Yun-Chun
2016-01-01
The area under the receiver operating characteristic curve is a widely used index to characterize the performance of diagnostic tests and prediction models. However, the index does not explicitly acknowledge the utilities of risk predictions. Moreover, for most clinical settings, what counts is whether a prediction model can guide therapeutic decisions in a way that improves patient outcomes, rather than to simply update probabilities. Based on decision theory, the authors propose an alternative index, the "average deviation about the probability threshold" (ADAPT). An ADAPT curve (a plot of ADAPT value against the probability threshold) neatly characterizes the decision-analysis performances of a risk prediction model. Several prediction models can be compared for their ADAPT values at a chosen probability threshold, for a range of plausible threshold values, or for the whole ADAPT curves. This should greatly facilitate the selection of diagnostic tests and prediction models.
Object-color-signal prediction using wraparound Gaussian metamers.
Mirzaei, Hamidreza; Funt, Brian
2014-07-01
In 2009, Alexander Logvinenko introduced an object-color atlas based on idealized reflectances called rectangular metamers. For a given color signal, the atlas specifies a unique reflectance that is metameric to it under the given illuminant. The atlas is complete and illuminant invariant, but cannot be implemented in practice. He later introduced a parametric representation of the object-color atlas based on smoother "wraparound Gaussian" functions. In this paper, these wraparound Gaussians are used to predict illuminant-induced color signal changes. The proposed method finds the wraparound Gaussian reflectance corresponding to a given color signal and then computationally "relights" that reflectance to determine what its color signal would be under any other illuminant. Since that reflectance is in the metamer set, the prediction is also physically realizable, which cannot be guaranteed for predictions obtained via von Kries scaling. Testing on Munsell spectra and a multispectral image shows that the proposed method outperforms predictions based on both von Kries scaling and the Bradford transform.
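The von Kries baseline that the paper compares against can be sketched as a diagonal scaling of channel responses by illuminant ratios; the three-channel values and illuminant responses below are hypothetical, and a full implementation would work in a cone response space.

```python
def von_kries_adapt(signal, illum_src, illum_dst):
    """Von Kries-style diagonal adaptation: scale each channel by the ratio
    of the destination to the source illuminant response."""
    return tuple(c * d / s for c, s, d in zip(signal, illum_src, illum_dst))

# hypothetical signal under an equal-energy source, relit by a warmer illuminant
adapted = von_kries_adapt((0.2, 0.4, 0.6), (1.0, 1.0, 1.0), (2.0, 1.0, 0.5))
```

Unlike the relighting of a physically realizable metamer, nothing constrains the scaled triple to correspond to any real reflectance, which is the shortcoming the abstract points out.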
Patlewicz, Grace; Casati, Silvia; Basketter, David A; Asturiol, David; Roberts, David W; Lepoittevin, Jean-Pierre; Worth, Andrew P; Aschberger, Karin
2016-12-01
Predictive testing to characterize substances for their skin sensitization potential has historically been based on animal tests such as the Local Lymph Node Assay (LLNA). In recent years, regulations in the cosmetics and chemicals sectors have provided strong impetus to develop non-animal alternatives. Three test methods have undergone OECD validation: the direct peptide reactivity assay (DPRA), the KeratinoSens™ and the human Cell Line Activation Test (h-CLAT). Whilst these methods perform relatively well in predicting LLNA results, a concern raised is their ability to predict chemicals that need activation to be sensitizing (pre- or pro-haptens). The current study reviewed an EURL ECVAM dataset of 127 substances for which information was available from the LLNA and the three non-animal test methods. Twenty-eight of the sensitizers needed to be activated, with the majority being pre-haptens. These were correctly identified by one or more of the test methods. Six substances were categorized exclusively as pro-haptens, but were correctly identified by at least one of the cell-based assays. The analysis here showed that skin metabolism was not likely to be a major consideration for assessing sensitization potential and that sensitizers requiring activation could be identified correctly using one or more of the current non-animal methods. Published by Elsevier Inc.
Food for Thought ... Mechanistic Validation
Hartung, Thomas; Hoffmann, Sebastian; Stephens, Martin
2013-01-01
Summary Validation of new approaches in regulatory toxicology is commonly defined as the independent assessment of the reproducibility and relevance (the scientific basis and predictive capacity) of a test for a particular purpose. In large ring trials, the emphasis to date has been mainly on reproducibility and predictive capacity (comparison to the traditional test), with less attention given to the scientific or mechanistic basis. Assessing predictive capacity is difficult for novel approaches (which are based on mechanism), such as pathways of toxicity or the complex networks within the organism (systems toxicology). This is highly relevant for implementing Toxicology for the 21st Century, either by high-throughput testing in the ToxCast/Tox21 project or omics-based testing in the Human Toxome Project. This article explores the mostly neglected assessment of a test's scientific basis, which moves mechanism and causality to the foreground when validating/qualifying tests. Such mechanistic validation faces the problem of establishing causality in complex systems. However, pragmatic adaptations of the Bradford Hill criteria, as well as bioinformatic tools, are emerging. As critical infrastructures of the organism are perturbed by a toxic mechanism, we argue that by focusing on the target of toxicity and its vulnerability, in addition to the way it is perturbed, we can anchor the identification of the mechanism and its verification. PMID:23665802
Predictive models of safety based on audit findings: Part 1: Model development and reliability.
Hsiao, Yu-Lin; Drury, Colin; Wu, Changxu; Paquet, Victor
2013-03-01
This consecutive study aimed at the quantitative validation of safety audit tools as predictors of safety performance, as we were unable to find prior studies that tested audit validity against safety outcomes. An aviation maintenance domain was chosen for this work, as both audits and safety outcomes are currently prescribed and regulated there. In Part 1, we developed a Human Factors/Ergonomics classification framework, based on the HFACS model (Shappell and Wiegmann, 2001a,b), for the human errors detected by audits, because merely counting audit findings did not predict future safety. The framework was tested for measurement reliability using four participants, two of whom classified errors on 1238 audit reports. Kappa values leveled out after about 200 audits at between 0.5 and 0.8 for the different tiers of error categories. This showed sufficient reliability to proceed with prediction validity testing in Part 2. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.
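Inter-rater agreement of the kind reported here is commonly measured with Cohen's kappa; a minimal sketch (the 2x2 agreement counts are made up, not the study's audit data):

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(r1)
    observed = sum(1 for a, b in zip(r1, r2) if a == b) / n
    expected = sum((r1.count(c) / n) * (r2.count(c) / n)
                   for c in set(r1) | set(r2))
    return (observed - expected) / (1.0 - expected)

# hypothetical 2x2 agreement table: 20 yes/yes, 5 yes/no, 10 no/yes, 15 no/no
rater1 = ["y"] * 20 + ["y"] * 5 + ["n"] * 10 + ["n"] * 15
rater2 = ["y"] * 20 + ["n"] * 5 + ["y"] * 10 + ["n"] * 15
kappa = cohens_kappa(rater1, rater2)
```

Values between 0.5 and 0.8, as in the study, are conventionally read as moderate to substantial agreement.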
Stadnicka-Michalak, Julita; Tanneberger, Katrin; Schirmer, Kristin; Ashauer, Roman
2014-01-01
Effect concentrations in the toxicity assessment of chemicals with fish and fish cells are generally based on external exposure concentrations. External concentrations as dose metrics may, however, hamper interpretation and extrapolation of toxicological effects, because it is the internal concentration that gives rise to the biologically effective dose. Thus, we need to understand the relationship between the external and internal concentrations of chemicals. The objectives of this study were to: (i) elucidate the time-course of the concentration of chemicals with a wide range of physicochemical properties in the compartments of an in vitro test system, (ii) derive a predictive model for toxicokinetics in the in vitro test system, (iii) test the hypothesis that internal effect concentrations in fish (in vivo) and fish cell lines (in vitro) correlate, and (iv) develop a quantitative in vitro to in vivo toxicity extrapolation method for fish acute toxicity. To achieve these goals, time-dependent amounts of organic chemicals were measured in medium, cells (RTgill-W1) and the plastic of exposure wells. Then, the relation between uptake, elimination rate constants, and log KOW was investigated for cells in order to develop a toxicokinetic model. This model was used to predict internal effect concentrations in cells, which were compared with internal effect concentrations in fish gills predicted by a Physiologically Based Toxicokinetic model. Our model could predict concentrations of non-volatile organic chemicals with log KOW between 0.5 and 7 in cells. The correlation of the log ratio of internal effect concentrations in fish gills and the fish gill cell line with the log KOW was significant (r>0.85, p = 0.0008, F-test). This ratio can be predicted from the log KOW of the chemical (77% of variance explained), comprising a promising model to predict lethal effects on fish based on in vitro data. PMID:24647349
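The uptake/elimination relation described can be sketched with a one-compartment toxicokinetic model; the rate constants below are hypothetical placeholders (the study additionally relates such constants to log KOW), not the fitted values.

```python
import math

def internal_conc(t, c_water, ku, ke):
    """One-compartment toxicokinetics: C_int(t) = (ku/ke) * C_w * (1 - e^(-ke*t)),
    approaching the steady state (ku/ke) * C_w.
    ku: uptake rate constant, ke: elimination rate constant."""
    return (ku / ke) * c_water * (1.0 - math.exp(-ke * t))

# hypothetical exposure at 2.0 (arbitrary units) with ku = 5.0/h, ke = 0.1/h
c_late = internal_conc(1000.0, 2.0, 5.0, 0.1)  # near the steady state of 100.0
```

The ratio ku/ke plays the role of a bioconcentration factor relating external to internal concentration at steady state.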
Day, Ryan; Qu, Xiaotao; Swanson, Rosemarie; Bohannan, Zach; Bliss, Robert
2011-01-01
Abstract Most current template-based structure prediction methods concentrate on finding the correct backbone conformation and then packing sidechains within that backbone. Our packing-based method derives distance constraints from conserved relative packing groups (RPGs). In our refinement approach, the RPGs provide a level of resolution that restrains global topology while allowing conformational sampling. In this study, we test our template-based structure prediction method using 51 prediction units from CASP7 experiments. RPG-based constraints are able to substantially improve approximately two-thirds of starting templates. Upon deeper investigation, we find that true positive spatial constraints derived from the RPGs, especially those non-local in sequence, were important to building nearer-native models. Surprisingly, the fraction of incorrect or false positive constraints does not strongly influence the quality of the final candidate. This result indicates that our RPG-based true positive constraints sample the self-consistent, cooperative interactions of the native structure. The lack of such reinforcing cooperativity explains the weaker effect of false positive constraints. Generally, these findings are encouraging indications that RPGs will improve template-based structure prediction. PMID:21210729
Chanthavilay, Phetsavanh; Reinharz, Daniel; Mayxay, Mayfong; Phongsavan, Keokedthong; Marsden, Donald E; Moore, Lynne; White, Lisa J
2016-01-01
Background Several approaches to reduce the incidence of invasive cervical cancers exist. The approach adopted should take into account contextual factors that influence the cost-effectiveness of the available options. Objective To determine the cost-effectiveness of screening strategies combined with a vaccination program for 10-year-old girls for cervical cancer prevention in Vientiane, Lao PDR. Methods A population-based dynamic compartment model was constructed. The interventions consisted of a 10-year-old girl vaccination program only, or this program combined with screening strategies, i.e., visual inspection with acetic acid (VIA), cytology-based screening, rapid human papillomavirus (HPV) DNA testing, or combined VIA and cytology testing. Simulations were run over 100 years. In base-case scenario analyses, we assumed 70% vaccination coverage with lifelong protection and 50% screening coverage. The outcome of interest was the incremental cost per Disability-Adjusted Life Year (DALY) averted. Results In base-case scenarios, compared to the next best strategy, the model predicted that VIA screening of women aged 30–65 years old every three years, combined with vaccination, was the most attractive option, costing 2 544 international dollars (I$) per DALY averted. Meanwhile, rapid HPV DNA testing was predicted to be more attractive than cytology-based screening or its combination with VIA. Among cytology-based screening options, combined VIA with conventional cytology testing was predicted to be the most attractive option. Multi-way sensitivity analyses did not change the results. Compared to rapid HPV DNA testing, VIA had a probability of cost-effectiveness of 73%. Compared to the vaccination-only option, the probability that a program consisting of screening women every five years would be cost-effective was around 60% and 80% if the willingness-to-pay threshold is fixed at one and three GDP per capita, respectively.
Conclusions A VIA screening program in addition to a girl vaccination program was predicted to be the most attractive option in the health care context of Lao PDR. When compared with other screening methods, VIA was the primary recommended method for combination with vaccination in Lao PDR. PMID:27631732
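The incremental cost per DALY averted reported above is an incremental cost-effectiveness ratio (ICER); a minimal sketch with made-up strategy totals, not the study's model outputs:

```python
def icer(cost_a, dalys_averted_a, cost_b, dalys_averted_b):
    """Incremental cost-effectiveness ratio of strategy A versus the next
    best strategy B: extra cost per extra DALY averted."""
    return (cost_a - cost_b) / (dalys_averted_a - dalys_averted_b)

# hypothetical totals (I$ spent, DALYs averted) for two strategies
ratio = icer(500000.0, 300.0, 400000.0, 250.0)
# a strategy is deemed cost-effective if the ratio falls below a
# willingness-to-pay threshold, e.g. one to three times GDP per capita
```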
Lv, Yufeng; Wei, Wenhao; Huang, Zhong; Chen, Zhichao; Fang, Yuan; Pan, Lili; Han, Xueqiong; Xu, Zihai
2018-06-20
The aim of this study was to develop a novel long non-coding RNA (lncRNA) expression signature to accurately predict early recurrence for patients with hepatocellular carcinoma (HCC) after curative resection. Using expression profiles downloaded from The Cancer Genome Atlas database, we identified multiple lncRNAs with differential expression between the early recurrence (ER) and non-early recurrence (non-ER) groups of HCC. Least absolute shrinkage and selection operator (LASSO) logistic regression models were used to develop a lncRNA-based classifier for predicting ER in the training set. An independent test set was used to validate the predictive value of this classifier. Furthermore, a co-expression network based on these lncRNAs and their highly related genes was constructed, and Gene Ontology and Kyoto Encyclopedia of Genes and Genomes pathway enrichment analyses of the genes in the network were performed. We identified 10 differentially expressed lncRNAs, including 3 that were upregulated and 7 that were downregulated in the ER group. The lncRNA-based classifier was constructed from 7 lncRNAs (AL035661.1, PART1, AC011632.1, AC109588.1, AL365361.1, LINC00861 and LINC02084); its accuracy was 0.83 in the training set, 0.87 in the test set and 0.84 in the total set. ROC curve analysis showed that the AUROC was 0.741 in the training set, 0.824 in the test set and 0.765 in the total set. A functional enrichment analysis suggested that the genes highly related to 4 of these lncRNAs are involved in the immune system. This 7-lncRNA expression profile can effectively predict early recurrence after surgical resection for HCC. This article is protected by copyright. All rights reserved.
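The reported AUROC can be computed directly from classifier scores via the rank (Mann-Whitney) statistic; a minimal sketch with toy scores and labels, not the study's expression data:

```python
def auroc(scores, labels):
    """AUROC via the Mann-Whitney rank statistic: the probability that a
    random positive case scores above a random negative case (ties = 1/2)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# toy risk scores for 4 patients (1 = early recurrence)
auc = auroc([0.9, 0.8, 0.3, 0.2], [1, 0, 1, 0])
```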
ERIC Educational Resources Information Center
Opara, Ijeoma M.; Onyekuru, Bruno U.; Njoku, Joyce U.
2015-01-01
The study investigated the predictive power of school-based assessment scores on students' achievement in the Junior Secondary Certificate Examination (JSCE) in English and Mathematics. Two hypotheses, tested at the 0.05 level of significance, guided the study. The study adopted an ex-post facto research design. A sample of 250 students was randomly drawn…
[Rapid test for detection of susceptibility to cefotaxime in Enterobacteriaceae].
Jiménez-Guerra, Gemma; Hoyos-Mallecot, Yannik; Rodríguez-Granger, Javier; Navarro-Marí, José María; Gutiérrez-Fernández, José
In this work, an "in house" rapid test for detecting Enterobacteriaceae susceptible to cefotaxime, based on the pH change caused by hydrolysis, is evaluated. Strains of Enterobacteriaceae from 1947 urine cultures were assessed using MicroScan panels and the "in house" test. The rapid test combines a phenol red solution with cefotaxime. Using MicroScan panels, 499 Enterobacteriaceae isolates were evaluated, which included 27 isolates of extended-spectrum beta-lactamase (ESBL)-producing Escherichia coli, 16 isolates of ESBL-producing Klebsiella pneumoniae and 1 isolate of ESBL-producing Klebsiella oxytoca. The "in house" test showed a sensitivity of 98% and a specificity of 97%, with a negative predictive value of 100% and a positive predictive value of 78%. The "in house" test based on the change of pH is useful in our area for the presumptive detection of cefotaxime-resistant Enterobacteriaceae strains. Copyright © 2016 Asociación Argentina de Microbiología. Publicado por Elsevier España, S.L.U. All rights reserved.
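The four reported indices follow directly from confusion-matrix counts; a sketch with hypothetical counts, not the study's 499 isolates:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),   # true positives among diseased
        "specificity": tn / (tn + fp),   # true negatives among healthy
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# hypothetical counts: 90 true pos, 20 false pos, 10 false neg, 80 true neg
metrics = diagnostic_metrics(90, 20, 10, 80)
```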
Liu, Qianying; Lei, Zhixin; Zhu, Feng; Ihsan, Awais; Wang, Xu; Yuan, Zonghui
2017-01-01
Genotoxicity and carcinogenicity testing of pharmaceuticals prior to commercialization is requested by regulatory agencies. The bacterial mutagenicity test has been considered to have the highest accuracy of carcinogenic prediction. However, some evidence suggests that the bacterial mutagenicity test often yields false-positive responses when used to predict carcinogenicity. Along with major changes made to the International Committee on Harmonization guidance on genotoxicity testing [S2 (R1)], the old data (especially the cytogenetic data) may not meet current guidelines. This review provides a compendium of retrievable results on the genotoxicity and animal carcinogenicity of 136 antiparasitics. Neither genotoxicity nor carcinogenicity data are available for 84 (61.8%), while 52 (38.2%) have been evaluated in at least one genotoxicity or carcinogenicity study, and only 20 (14.7%) in both genotoxicity and carcinogenicity studies. Among 33 antiparasitics with at least one old result in in vitro genotoxicity, 15 (45.5%) are in agreement with the current ICH S2 (R1) guidance for data acceptance. Compared with other genotoxicity assays, DNA lesion assays can significantly increase the accuracy of prediction of carcinogenicity. Together, a combination of DNA lesion and bacterial tests is a more accurate way to predict carcinogenicity. PMID:29170735
Landin, Wendell E; Mun, Greg C; Nims, Raymond W; Harbell, John W
2007-09-01
The cytosensor microphysiometer (mu phi) was investigated as a rapid, relatively inexpensive test to predict performance of skin cleansing wipes on the human 21-day cumulative irritation patch test (21CIPT). It indirectly measures metabolic rate changes in L929 cells as a function of test article dose, by measuring the acidification rate in a low-buffer medium. The dose producing a 50% reduction in metabolic rate (MRD50), relative to the baseline rate, is used as a measure of toxicity. The acute toxicity of the mu phi assay can be compared to the chronic toxicity of the 21CIPT, which is based largely on the exposure of test agents to the epidermal cells, resulting in damage and penetration of the stratum corneum leading to cell toxicity. Two series of surfactant-based cleansing wipe products were tested via the mu phi assay and 21CIPT. The first series, consisting of 20 products, was used to determine a prediction model. The second series of 38 products consisted of routine product development formulas or marketed products. Comparing the results from both tests, samples with an MRD50 greater than 50 mg/ml provided a 21CIPT score consistent with a product that performs satisfactorily in the market. When the MRD50 was greater than 78 mg/ml, the 21CIPT score was usually zero. The mu phi may be more sensitive than the 21CIPT for ranking minimally irritating materials. The mu phi assay is useful as a screen for predicting the performance of a wet wipes formula on the 21CIPT, and concurrently reduces the use of animals for safety testing in a product development program for cleansing wipes.
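The MRD50 is read off a dose-response curve; a sketch using linear interpolation on hypothetical dose-response points (the mg/ml units are only echoed from the abstract, and the data are made up):

```python
def mrd50(doses, responses):
    """Dose giving a 50% reduction in metabolic rate, by linear interpolation
    between the bracketing points of a descending dose-response curve.
    responses are fractions of the baseline metabolic rate (1.0 = no effect)."""
    points = list(zip(doses, responses))
    for (d0, r0), (d1, r1) in zip(points, points[1:]):
        if r0 >= 0.5 >= r1:
            # interpolate the dose where the response crosses 0.5
            return d0 + (r0 - 0.5) / (r0 - r1) * (d1 - d0)
    return None

# hypothetical dose-response data (doses in mg/ml)
dose_50 = mrd50([10.0, 30.0, 100.0, 300.0], [0.95, 0.8, 0.4, 0.1])
```

Under the abstract's prediction model, this hypothetical value above 50 mg/ml would be read as consistent with satisfactory 21CIPT performance.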
Model-based influences on humans’ choices and striatal prediction errors
Daw, Nathaniel D.; Gershman, Samuel J.; Seymour, Ben; Dayan, Peter; Dolan, Raymond J.
2011-01-01
Summary The mesostriatal dopamine system is prominently implicated in model-free reinforcement learning, with fMRI BOLD signals in ventral striatum notably covarying with model-free prediction errors. However, latent learning and devaluation studies show that behavior also shows hallmarks of model-based planning, and the interaction between model-based and model-free values, prediction errors and preferences is underexplored. We designed a multistep decision task in which model-based and model-free influences on human choice behavior could be distinguished. By showing that choices reflected both influences we could then test the purity of the ventral striatal BOLD signal as a model-free report. Contrary to expectations, the signal reflected both model-free and model-based predictions in proportions matching those that best explained choice behavior. These results challenge the notion of a separate model-free learner and suggest a more integrated computational architecture for high-level human decision-making. PMID:21435563
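The model-free prediction error referred to here is, in its simplest form, a temporal-difference error; a minimal Q-learning-style sketch (not the paper's hybrid model-based/model-free fit, and all values are illustrative):

```python
def td_error(q, state, action, reward, next_q_values, gamma=0.95):
    """Model-free temporal-difference prediction error:
    delta = r + gamma * max_a' Q(s', a') - Q(s, a)."""
    return reward + gamma * max(next_q_values) - q[(state, action)]

# illustrative values, not fitted behavioral parameters
q = {("first_stage", "left"): 0.2}
delta = td_error(q, "first_stage", "left", 1.0, [0.5, 0.1])
q[("first_stage", "left")] += 0.1 * delta   # learning-rate-weighted update
```

In the study's framing, a model-based learner would instead compute state values from a learned transition model, yielding a different prediction-error signal for the same choices.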
Stochastic estimation of plant-available soil water under fluctuating water table depths
NASA Astrophysics Data System (ADS)
Or, Dani; Groeneveld, David P.
1994-12-01
Preservation of native valley-floor phreatophytes while pumping groundwater for export from Owens Valley, California, requires reliable predictions of plant water use. These predictions are compared with stored soil water within well field regions and serve as a basis for managing groundwater resources. Soil water measurement errors, variable recharge, unpredictable climatic conditions affecting plant water use, and modeling errors make soil water predictions uncertain and error-prone. We developed and tested a scheme based on soil water balance coupled with implementation of Kalman filtering (KF) for (1) providing physically based soil water storage predictions with prediction errors projected from the statistics of the various inputs, and (2) reducing the overall uncertainty in both estimates and predictions. The proposed KF-based scheme was tested using experimental data collected at a location on the Owens Valley floor where the water table was artificially lowered by groundwater pumping and later allowed to recover. Vegetation composition and per cent cover, climatic data, and soil water information were collected and used for developing a soil water balance. Predictions and updates of soil water storage under different types of vegetation were obtained for a period of 5 years. The main results show that: (1) the proposed predictive model provides reliable and resilient soil water estimates under a wide range of external conditions; (2) the predicted soil water storage and the error bounds provided by the model offer a realistic and rational basis for decisions such as when to curtail well field operation to ensure plant survival. The predictive model offers a practical means for accommodating simple aspects of spatial variability by considering the additional source of uncertainty as part of modeling or measurement uncertainty.
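One predict/update cycle of a scalar Kalman filter of the kind used for the soil water balance can be sketched as follows (a random-walk state model with made-up variances, not the study's calibrated water-balance dynamics):

```python
def kalman_step(x, p, z, q_var, r_var):
    """One predict/update cycle of a scalar Kalman filter for soil water
    storage. x: state estimate, p: its variance, z: new measurement,
    q_var: process noise variance, r_var: measurement noise variance."""
    x_pred, p_pred = x, p + q_var          # predict (persistence model)
    k = p_pred / (p_pred + r_var)          # Kalman gain
    x_new = x_pred + k * (z - x_pred)      # update with the measurement
    p_new = (1.0 - k) * p_pred             # reduced uncertainty
    return x_new, p_new

# made-up storage estimate (mm), variance, and soil-moisture reading
x1, p1 = kalman_step(100.0, 4.0, 110.0, 1.0, 5.0)
```

The posterior variance p_new is what supplies the error bounds the abstract describes as a basis for well-field decisions.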
GASP: Gapped Ancestral Sequence Prediction for proteins
Edwards, Richard J; Shields, Denis C
2004-01-01
Background The prediction of ancestral protein sequences from multiple sequence alignments is useful for many bioinformatics analyses. Predicting ancestral sequences is not a simple procedure and relies on accurate alignments and phylogenies. Several algorithms exist based on Maximum Parsimony or Maximum Likelihood methods but many current implementations are unable to process residues with gaps, which may represent insertion/deletion (indel) events or sequence fragments. Results Here we present a new algorithm, GASP (Gapped Ancestral Sequence Prediction), for predicting ancestral sequences from phylogenetic trees and the corresponding multiple sequence alignments. Alignments may be of any size and contain gaps. GASP first assigns the positions of gaps in the phylogeny before using a likelihood-based approach centred on amino acid substitution matrices to assign ancestral amino acids. Important outgroup information is used by first working down from the tips of the tree to the root, using descendant data only to assign probabilities, and then working back up from the root to the tips using descendant and outgroup data to make predictions. GASP was tested on a number of simulated datasets based on real phylogenies. Prediction accuracy for ungapped data was similar to three alternative algorithms tested, with GASP performing better in some cases and worse in others. Adding simple insertions and deletions to the simulated data did not have a detrimental effect on GASP accuracy. Conclusions GASP (Gapped Ancestral Sequence Prediction) will predict ancestral sequences from multiple protein alignments of any size. Although not as accurate in all cases as some of the more sophisticated maximum likelihood approaches, it can process a wide range of input phylogenies and will predict ancestral sequences for gapped and ungapped residues alike. PMID:15350199
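The gap-assignment step can be illustrated with Fitch's small-parsimony bottom-up pass on a binary gap/residue character (a simplified stand-in: GASP combines this kind of gap assignment with a likelihood-based amino acid step and an outgroup-informed top-down pass):

```python
def fitch_up(tree, leaf_states):
    """Fitch small-parsimony bottom-up pass. tree: nested 2-tuples with leaf
    names at the tips; leaf_states: name -> 'gap' or 'res'. Returns the set
    of most-parsimonious states at the node."""
    if isinstance(tree, str):                 # leaf
        return {leaf_states[tree]}
    left = fitch_up(tree[0], leaf_states)
    right = fitch_up(tree[1], leaf_states)
    common = left & right
    return common if common else left | right

# toy alignment column: is each leaf sequence gapped at this position?
tree = (("A", "B"), ("C", "D"))
states = {"A": "gap", "B": "gap", "C": "res", "D": "gap"}
root_states = fitch_up(tree, states)
```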
Cardoso, Débora Morais; Gilio, Alfredo Elias; Hsin, Shieh Huei; Machado, Beatriz Marcondes; de Paulis, Milena; Lotufo, João Paulo B; Martinez, Marina Baquerizo; Grisi, Sandra Josefina E
2013-01-01
To evaluate the impact of the routine use of a rapid antigen detection test in the diagnosis and treatment of acute pharyngotonsillitis in children. This is a prospective and observational study, with a protocol compliance design, established at the Emergency Unit of the University Hospital of Universidade de São Paulo for the care of children and adolescents diagnosed with acute pharyngitis. A total of 650 children and adolescents were enrolled. Based on clinical findings, antibiotics would have been prescribed for 389 patients (59.8%); using the rapid antigen detection test, they were prescribed for 286 patients (44.0%). Among the 261 children who would not have received antibiotics based on the clinical evaluation, 111 (42.5%) had a positive rapid antigen detection test. Diagnosis based only on clinical evaluation showed 61.1% sensitivity, 47.7% specificity, 44.9% positive predictive value, and 57.5% negative predictive value. The clinical diagnosis of streptococcal pharyngotonsillitis had low sensitivity and specificity. The routine use of the rapid antigen detection test led to a reduction in antibiotic use and the identification of a risk group for complications of streptococcal infection, since 42.5% of patients with a positive rapid antigen detection test would not have received antibiotics based only on clinical diagnosis.
Some practical observations on the accelerated testing of Nickel-Cadmium Cells
NASA Technical Reports Server (NTRS)
Mcdermott, P. P.
1979-01-01
A large-scale test of 6.0 Ah nickel-cadmium cells conducted at the Naval Weapons Support Center, Crane, Indiana, has demonstrated a methodology for predicting battery life based on failure data from cells cycled in an accelerated mode. After examining eight variables used to accelerate failure, it was determined that temperature and depth of discharge were the most reliable and efficient parameters for use in accelerating failure and for predicting life.
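A life model driven by temperature and depth of discharge can be sketched as an Arrhenius factor combined with an inverse power law in DOD; all constants below are illustrative assumptions, not the fitted Crane results.

```python
import math

def predicted_cycle_life(temp_k, dod, a=10000.0, ea_over_k=6000.0, n=1.5,
                         t_ref=298.0):
    """Hypothetical accelerated-life model: cycle life shortens with
    temperature (Arrhenius factor) and with depth of discharge (inverse
    power law). a = life at the reference temperature and 100% DOD."""
    return a * math.exp(ea_over_k * (1.0 / temp_k - 1.0 / t_ref)) / dod ** n
```

Raising temperature or DOD in test shortens life, which is what lets failures observed under accelerated conditions be extrapolated back to nominal operation.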
1981-10-01
Numerical predictions used in the comparisons were obtained from the energy-based, finite-difference computer program CLAPP. Test specimens were clamped along their edges and included longitudinal stiffeners, whose strain energy was expressed in matrix form subject to displacement continuity. It was found that theoretical bifurcation loads predicted by the energy method represent upper bounds to the classical bifurcation loads associated with the test.
NASA Astrophysics Data System (ADS)
Love, D. M.; Venturas, M.; Sperry, J.; Wang, Y.; Anderegg, W.
2017-12-01
Modeling approaches for tree stomatal control often rely on empirical fitting to provide accurate estimates of whole-tree transpiration (E) and assimilation (A), which are limited in their predictive power by the data envelope used to calibrate model parameters. Optimization-based models hold promise as a means to predict stomatal behavior under novel climate conditions. We designed an experiment to test a hydraulic-trait-based optimization model, which predicts stomatal conductance from a gain/risk approach. Optimal stomatal conductance is expected to maximize the potential carbon gain by photosynthesis and minimize the risk to hydraulic transport imposed by cavitation. The modeled risk to the hydraulic network is assessed from cavitation vulnerability curves, a commonly measured physiological trait in woody plant species. Over a growing season, garden-grown plots of aspen (Populus tremuloides Michx.) and ponderosa pine (Pinus ponderosa Douglas) were subjected to three distinct drought treatments (moderate, severe, severe with rehydration) relative to a control plot to test model predictions. Model outputs of predicted E, A, and xylem pressure can be directly compared to both continuous data (whole-tree sapflux, soil moisture) and point measurements (leaf-level E, A, xylem pressure). The model also predicts levels of whole-tree hydraulic impairment expected to increase mortality risk. This threshold is used to estimate survivorship in the drought treatment plots. The model can be run at two scales, either entirely from climate (meteorological inputs, irrigation) or using the physiological measurements as a starting point. These data will be used to study model performance and utility, and aid in developing the model for larger scale applications.
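The gain/risk choice of stomatal conductance can be sketched as maximizing normalized carbon gain minus hydraulic risk over a conductance grid; the gain and risk functions below are hypothetical stand-ins for the model's photosynthesis and vulnerability-curve terms.

```python
def optimal_conductance(gs_grid, gain, risk):
    """Gain/risk optimization: pick the stomatal conductance g that
    maximizes normalized carbon gain minus normalized hydraulic risk."""
    return max(gs_grid, key=lambda g: gain(g) - risk(g))

def gain(g):
    # saturating photosynthetic gain with conductance (hypothetical shape)
    return g / (g + 0.1)

def risk(g):
    # cavitation risk rising steeply with conductance (hypothetical shape)
    return g ** 2

gs_grid = [i / 100.0 for i in range(101)]   # normalized conductance grid
g_opt = optimal_conductance(gs_grid, gain, risk)
```

The optimum sits where the marginal carbon gain of opening stomata further equals the marginal hydraulic risk, which is the trade-off the abstract describes.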
Verdon, Megan; Morrison, R S; Hemsworth, P H
2018-05-01
This experiment examined the effects of group composition on sow aggressive behaviour and welfare. Over 6 time replicates, 360 sows (parity 1-6) were mixed into groups (10 sows per pen, 1.8 m²/sow) composed of animals that were predicted to be aggressive (n = 18 pens) or groups composed of animals that were randomly selected (n = 18 pens). Predicted aggressive sows were selected based on a model-pig test that has been shown to be related to the aggressive behaviour of parity 2 sows when subsequently mixed in groups. Measurements were taken on aggression delivered post-mixing, and aggression delivered around feeding, fresh skin injuries and plasma cortisol concentrations at days 2 and 24 post-mixing. Live weight gain, litter size (born alive, total born, stillborn piglets), and farrowing rate were also recorded. Manipulating the group composition based on predicted sow aggressiveness had no effect (P > 0.05) on sow aggression delivered at mixing or around feeding, fresh injuries, cortisol, weight gain from day 2 to day 24, farrowing rate, or litter size. The lack of treatment effects in the present experiment could be attributed to (1) a failure of the model-pig test to predict aggression in older sows in groups, or (2) the dependence of the expression of the aggressive phenotype on factors such as social experience and characteristics (e.g., physical size and aggressive phenotype) of pen mates. This research draws attention to the intrinsic difficulties associated with predicting behaviour across contexts, particularly when the behaviour is highly dependent on interactions with conspecifics, and highlights the social complexities involved in the presentation of a behavioural phenotype. Copyright © 2018 Elsevier B.V. All rights reserved.
Echigoya, Yusuke; Mouly, Vincent; Garcia, Luis; Yokota, Toshifumi; Duddy, William
2015-01-01
The use of antisense ‘splice-switching’ oligonucleotides to induce exon skipping represents a potential therapeutic approach to various human genetic diseases. It has achieved greatest maturity in exon skipping of the dystrophin transcript in Duchenne muscular dystrophy (DMD), for which several clinical trials are completed or ongoing, and a large body of data exists describing tested oligonucleotides and their efficacy. The rational design of an exon skipping oligonucleotide involves the choice of an antisense sequence, usually between 15 and 32 nucleotides, targeting the exon that is to be skipped. Although parameters describing the target site can be computationally estimated and several have been identified to correlate with efficacy, methods to predict efficacy are limited. Here, an in silico pre-screening approach is proposed, based on predictive statistical modelling. Previous DMD data were compiled, and for each oligonucleotide some 60 descriptors were considered. Statistical modelling approaches were applied to derive algorithms that predict exon skipping for a given target site. We confirmed (1) the binding energetics of the oligonucleotide to the RNA and (2) the distance in bases of the target site from the splice acceptor site as the two most predictive parameters, and we included these and several other parameters (while discounting many) in an in silico screening process, based on their capacity to predict high or low efficacy in either phosphorodiamidate morpholino oligomers (89% correctly predicted) and/or 2′-O-methyl RNA oligonucleotides (76% correctly predicted). Predictions correlated strongly with in vitro testing for sixteen de novo PMO sequences targeting various positions on DMD exons 44 (R² = 0.89) and 53 (R² = 0.89), one of which represents a potential novel candidate for clinical trials.
We provide these algorithms together with a computational tool that facilitates screening to predict exon skipping efficacy at each position of a target exon. PMID:25816009
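As an illustration of the kind of statistical screen described above, the sketch below combines the two most predictive descriptors (binding energetics and distance from the splice acceptor site) in a toy logistic score; all weights are invented placeholders, not the published model coefficients.

```python
import math

def skip_efficacy_score(binding_energy_kcal, dist_from_acceptor_nt,
                        w0=-1.0, w_energy=-0.15, w_dist=-0.01):
    """Toy logistic screen over the two most predictive descriptors; all
    weights are illustrative placeholders, not the published coefficients."""
    z = w0 + w_energy * binding_energy_kcal + w_dist * dist_from_acceptor_nt
    return 1.0 / (1.0 + math.exp(-z))

# Stronger (more negative) binding energy raises the predicted probability
# of efficient skipping, all else being equal
print(skip_efficacy_score(-40.0, 30) > skip_efficacy_score(-20.0, 30))  # → True
```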
Kwon, Andrew T.; Chou, Alice Yi; Arenillas, David J.; Wasserman, Wyeth W.
2011-01-01
We performed a genome-wide scan for muscle-specific cis-regulatory modules (CRMs) using three computational prediction programs. Based on the predictions, 339 candidate CRMs were tested in cell culture with NIH3T3 fibroblasts and C2C12 myoblasts for the capacity to direct selective reporter gene expression to differentiated C2C12 myotubes. A subset of 19 CRMs was validated as functional in the assay. The rate of predictive success reveals striking limitations of computational regulatory sequence analysis methods for CRM discovery. Motif-based methods performed no better than predictions based only on sequence conservation. Analysis of the properties of the functional sequences relative to inactive sequences indicates that nucleotide sequence composition can be an important characteristic to incorporate in future methods for improved predictive specificity. Muscle-related TFBSs predicted within the functional sequences display greater sequence conservation than non-TFBS flanking regions. Comparison with recent MyoD and histone modification ChIP-Seq data supports the validity of the functional regions. PMID:22144875
Estimation of relative effectiveness of phylogenetic programs by machine learning.
Krivozubov, Mikhail; Goebels, Florian; Spirin, Sergei
2014-04-01
Reconstruction of the phylogeny of a protein family from a sequence alignment can produce results of varying quality. Our goal is to predict the quality of phylogeny reconstruction based on features that can be extracted from the input alignment. We used the Fitch-Margoliash (FM) method of phylogeny reconstruction and a random forest as the predictor. For training and testing the predictor, alignments of orthologous series (OS) were used, for which the result of phylogeny reconstruction can be evaluated by comparison with the trees of the corresponding organisms. Our results show that the quality of phylogeny reconstruction can be predicted with more than 80% precision. We also tried to predict which phylogeny reconstruction method, FM or UPGMA, is better for a particular alignment. With the feature set used, 56% of the alignments for which the predictor favours UPGMA really do give a better result with UPGMA. Taking into account that UPGMA performs better for only 34% of the alignments in our testing set, this result shows that it is in principle possible to predict the better phylogeny reconstruction method from features of a sequence alignment.
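The random-forest idea used above can be sketched minimally as a voting ensemble of one-feature decision stumps trained on synthetic alignment features; this is an illustrative toy, not the authors' feature set or implementation.

```python
import random

def train_forest(X, y, n_trees=15, seed=0):
    """Minimal random-subspace ensemble of one-feature decision stumps, a
    sketch of the random-forest classifier idea (data here are synthetic)."""
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_trees):
        f = rng.randrange(len(X[0]))                 # random feature per tree
        thr = sum(row[f] for row in X) / len(X)      # split at the column mean
        hi = [lab for row, lab in zip(X, y) if row[f] > thr]
        lo = [lab for row, lab in zip(X, y) if row[f] <= thr]
        stumps.append((f, thr,
                       max(set(hi), key=hi.count),   # majority label above
                       max(set(lo), key=lo.count)))  # majority label below
    return stumps

def predict(stumps, x):
    votes = [hi if x[f] > thr else lo for f, thr, hi, lo in stumps]
    return max(set(votes), key=votes.count)          # majority vote of trees

# Synthetic alignment features (e.g. mean pairwise identity, gap fraction)
X = [[0.9, 0.8], [0.8, 0.7], [0.2, 0.3], [0.1, 0.2]]
y = [1, 1, 0, 0]   # 1 = phylogeny reconstruction expected to be good
forest = train_forest(X, y)
print([predict(forest, x) for x in X])  # → [1, 1, 0, 0]
```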
NASA Technical Reports Server (NTRS)
Sinha, Neeraj; Brinckman, Kevin; Jansen, Bernard; Seiner, John
2011-01-01
A method was developed for obtaining propulsive base flow data in both hot and cold jet environments, at Mach numbers and altitudes relevant to NASA launcher designs. The base flow data were used to perform computational fluid dynamics (CFD) turbulence model assessments of base flow predictive capabilities, in order to provide increased confidence in base thermal and pressure load predictions obtained from computational modeling efforts. Predictive CFD analyses were used in the design of the experiments, available propulsive models were used to reduce program costs and increase the chances of success, and a wind tunnel facility was used. The data obtained allowed assessment of CFD/turbulence models in a complex flow environment, working within a building-block procedure to validation, in which cold, non-reacting test data were first used for validation, followed by more complex reacting base flow validation.
Casillas, Jean-Marie; Joussain, Charles; Gremeaux, Vincent; Hannequin, Armelle; Rapin, Amandine; Laurent, Yves; Benaïm, Charles
2015-02-01
To develop a new predictive model of maximal heart rate based on two walking tests at different speeds (comfortable and brisk walking) as an alternative to a cardiopulmonary exercise test during cardiac rehabilitation. Evaluation of a clinical assessment tool. A Cardiac Rehabilitation Department in France. A total of 148 patients (133 men), mean age 59 ± 9 years, at the end of an outpatient cardiac rehabilitation programme. Patients successively performed a 6-minute walk test, a 200 m fast-walk test (200mFWT), and a cardiopulmonary exercise test, with heart rate measured at the end of each test. An all-possible-regressions procedure was used to determine the best predictive regression models of maximal heart rate. The best model was compared with the Fox equation in terms of predictive error of maximal heart rate using the paired t-test. Results of the two walking tests correlated significantly with maximal heart rate determined during the cardiopulmonary exercise test, whereas anthropometric parameters and resting heart rate did not. The simplified predictive model with the most acceptable mean error was: maximal heart rate = 130 - 0.6 × age + 0.3 × HR200mFWT (R² = 0.24). This model was superior to the Fox formula (R² = 0.138). The relationship between the training target heart rate calculated from measured reserve heart rate and that established using this predictive model was statistically significant (r = 0.528, p < 10⁻⁶). A formula combining age and the heart rate measured during a safe, simple fast-walk test is more efficient than an equation including only age for predicting maximal heart rate and training target heart rate. © The Author(s) 2014.
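The simplified predictive model reported above can be applied directly; the patient values below are invented for illustration.

```python
def predict_hr_max(age_years, hr_200m_fwt):
    """HRmax = 130 - 0.6 x age + 0.3 x heart rate at the end of the 200mFWT
    (the simplified model from the abstract, R^2 = 0.24)."""
    return 130 - 0.6 * age_years + 0.3 * hr_200m_fwt

# A hypothetical 59-year-old patient finishing the 200mFWT at 120 beats/min
print(round(predict_hr_max(59, 120)))  # → 131
```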
Improved nonlinear prediction method
NASA Astrophysics Data System (ADS)
Adenan, Nur Hamiza; Md Noorani, Mohd Salmi
2014-06-01
The analysis and prediction of time series data have been widely addressed by researchers. Many techniques have been developed for application in various areas, such as weather forecasting, financial markets and hydrological phenomena, involving data that are contaminated by noise. Therefore, various techniques have been introduced to improve the analysis and prediction of time series data. Given the importance of analysis and the accuracy of the prediction result, a study was undertaken to test the effectiveness of the improved nonlinear prediction method for data that contain noise. The improved nonlinear prediction method involves the formation of composite serial data based on the successive differences of the time series. Phase space reconstruction was then performed on the composite (one-dimensional) data to reconstruct a number of space dimensions. Finally, the local linear approximation method was employed to make a prediction based on the phase space. This improved method was tested with logistic map data series containing 0%, 5%, 10%, 20% and 30% noise. The results show that by using the improved method, the predictions were found to be in close agreement with the observed values. The correlation coefficient was close to one when the improved method was applied to data with up to 10% noise. Thus, an approach was introduced for predicting time series data that contain noise without involving any noise reduction method.
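The pipeline described above can be sketched in three steps: composite serial data from successive differences, delay-embedding phase space reconstruction, and a local prediction in phase space. For brevity the local *linear* fit is replaced here by its zeroth-order (nearest-neighbour) analogue, and the data are noise-free logistic map values; the embedding dimension is an assumed parameter.

```python
# (1) successive differences -> (2) delay embedding -> (3) local prediction

def logistic_series(n, x0=0.4, r=4.0):
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

def predict_next(series, dim=3):
    diffs = [b - a for a, b in zip(series, series[1:])]      # composite data
    points = [tuple(diffs[i:i + dim]) for i in range(len(diffs) - dim)]
    query = tuple(diffs[-dim:])                              # current state
    dist = lambda p: sum((a - b) ** 2 for a, b in zip(p, query))
    i = min(range(len(points)), key=lambda k: dist(points[k]))
    return series[-1] + diffs[i + dim]       # add the neighbour's next step

xs = logistic_series(500)
pred = predict_next(xs[:-1])                 # predict the held-out last value
print(abs(pred - xs[-1]))
```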
Development of an accident duration prediction model on the Korean Freeway Systems.
Chung, Younshik
2010-01-01
Since duration prediction is one of the most important steps in the accident management process, several approaches have been developed for modeling accident duration. This paper presents a model for accident duration prediction based on an accurately recorded and large accident dataset from the Korean Freeway Systems. To develop the duration prediction model, this study utilizes the log-logistic accelerated failure time (AFT) metric model and a 2-year accident duration dataset from 2006 to 2007. Specifically, the 2006 dataset was used to develop the prediction model, and the 2007 dataset was then employed to test the temporal transferability of the 2006 model. Although the duration prediction model has limitations, such as large prediction errors due to individual differences among accident treatment teams in clearing similar accidents, the 2006 model yielded reasonable predictions on the mean absolute percentage error (MAPE) scale. Additionally, the results of the statistical test for temporal transferability indicated that the estimated parameters of the duration prediction model are stable over time. This temporal stability suggests that the model may have potential as a basis for making rational diversion and dispatching decisions in the event of an accident. Ultimately, such information will help mitigate traffic congestion due to accidents.
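To illustrate the prediction form used above: in a log-logistic AFT model, log(T) is linear in the covariates plus a scaled error, so the median predicted duration is exp(x·β). The coefficients, covariates and accident records below are hypothetical, not the fitted Korean Freeway model; MAPE is the scale the study uses to judge prediction quality.

```python
import math

def median_duration_min(lanes_blocked, injury, beta=(3.2, 0.25, 0.6)):
    """Median duration under an AFT form: exp(b0 + b1*x1 + b2*x2).
    Coefficients are illustrative placeholders."""
    b0, b1, b2 = beta
    return math.exp(b0 + b1 * lanes_blocked + b2 * injury)

def mape(actual, predicted):
    """Mean absolute percentage error."""
    return 100.0 * sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

obs = [40.0, 95.0, 60.0]                              # observed durations (min)
pred = [median_duration_min(1, 0), median_duration_min(2, 1),
        median_duration_min(1, 1)]
print(round(mape(obs, pred), 1))
```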
Nanavati, Tania; Seemaladinne, Nirupama; Regier, Michael; Yossuck, Panitan; Pergami, Paola
2015-01-01
Background: Neonatal hypoxic ischemic encephalopathy (HIE) is a major cause of mortality, morbidity, and long-term neurological deficits. Despite the availability of neuroimaging and neurophysiological testing, tools for accurate early diagnosis and prediction of developmental outcome are still lacking. The goal of this study was to determine if combined use of magnetic resonance imaging (MRI) and electroencephalography (EEG) findings could support outcome prediction. Methods: We retrospectively reviewed records of 17 HIE neonates, classified brain MRI and EEG findings based on severity, and assessed clinical outcome up to 48 months. We determined the relation between MRI/EEG findings and clinical outcome. Results: We demonstrated a significant relationship between MRI findings and clinical outcome (Fisher’s exact test, p = 0.017). EEG provided no additional information about the outcome beyond that contained in the MRI score. The statistical model for outcome prediction based on random forests suggested that EEG readings at 24 hours and 72 hours could be important variables for outcome prediction, but this needs to be investigated further. Conclusion: Caution should be used when discussing prognosis for neonates with mild-to-moderate HIE based on early MR imaging and EEG findings. A robust, quantitative marker of HIE severity that allows for accurate prediction of long-term outcome, particularly for mild-to-moderate cases, is still needed. PMID:25862075
NASA Astrophysics Data System (ADS)
Wanto, Anjar; Zarlis, Muhammad; Sawaluddin; Hartama, Dedy
2017-12-01
Backpropagation is an artificial neural network algorithm well suited to prediction tasks, one of which is predicting the rate of the Consumer Price Index (CPI) for the foodstuff sector. The Fletcher-Reeves conjugate gradient method is a suitable optimization complement to backpropagation because it can shorten iterations without reducing the quality of the training and testing results. The CPI data to be predicted come from the Central Statistics Agency (BPS) of Pematangsiantar. The results of this study are expected to help the government make policies to improve economic growth. In this study, the data are processed by conducting training and testing with backpropagation using a learning rate of 0.01 and a minimum target error of 0.001-0.09. The training network is built with binary and bipolar sigmoid activation functions. After the backpropagation results are obtained, they are then optimized using the Fletcher-Reeves conjugate gradient method by conducting the same training and testing on 5 predefined network architectures. The results show that the chosen method can increase both the speed and the accuracy of prediction.
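The Fletcher-Reeves update at the heart of the optimization is beta_k = ||g_k||² / ||g_{k-1}||², d_k = -g_k + beta_k·d_{k-1}. The sketch below demonstrates it on a 2-variable quadratic with exact line search rather than a full backpropagation network; on an n-dimensional quadratic, conjugate gradient reaches the minimum in n steps.

```python
# Fletcher-Reeves conjugate gradient on f(p) = 0.5 * p^T A p, A = diag(2, 20)

A = (2.0, 20.0)                                    # diagonal Hessian of f

def grad(p):
    return (A[0] * p[0], A[1] * p[1])

def fletcher_reeves(p, steps=2):
    g = grad(p)
    d = (-g[0], -g[1])                             # start with steepest descent
    for _ in range(steps):
        dAd = A[0] * d[0] ** 2 + A[1] * d[1] ** 2
        alpha = -(g[0] * d[0] + g[1] * d[1]) / dAd  # exact line search
        p = (p[0] + alpha * d[0], p[1] + alpha * d[1])
        g_new = grad(p)
        beta = (g_new[0] ** 2 + g_new[1] ** 2) / (g[0] ** 2 + g[1] ** 2)
        d = (-g_new[0] + beta * d[0], -g_new[1] + beta * d[1])
        g = g_new
    return p

p = fletcher_reeves((3.0, 1.0))
print(abs(p[0]) < 1e-9 and abs(p[1]) < 1e-9)  # → True (minimum in 2 steps)
```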
A pilot study of NMR-based sensory prediction of roasted coffee bean extracts.
Wei, Feifei; Furihata, Kazuo; Miyakawa, Takuya; Tanokura, Masaru
2014-01-01
Nuclear magnetic resonance (NMR) spectroscopy can be considered a kind of "magnetic tongue" for the characterisation and prediction of the tastes of foods, since it provides a wealth of information in a nondestructive and nontargeted manner. In the present study, the chemical substances in roasted coffee bean extracts that could distinguish and predict the different sensations of coffee taste were identified by combining NMR-based metabolomics with a human sensory test and applying the multivariate projection method of orthogonal projection to latent structures (OPLS). In addition, the tastes of commercial coffee beans were successfully predicted based on their NMR metabolite profiles using our OPLS model, suggesting that NMR-based metabolomics accompanied by multiple statistical models is a convenient, fast and accurate approach to the sensory evaluation of coffee. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Allgood, Daniel C.
2016-01-01
The objective of the presented work was to develop validated computational fluid dynamics (CFD) based methodologies for predicting propellant detonations and their associated blast environments. Applications of interest were scenarios relevant to rocket propulsion test and launch facilities. All model development was conducted within the framework of the Loci/CHEM CFD tool due to its reliability and robustness in predicting high-speed combusting flow-fields associated with rocket engines and plumes. During the course of the project, verification and validation studies were completed for hydrogen-fueled detonation phenomena such as shock-induced combustion, confined detonation waves, vapor cloud explosions, and deflagration-to-detonation transition (DDT) processes. The DDT validation cases included predicting flame acceleration mechanisms associated with turbulent flame-jets and flow-obstacles. Excellent agreement between test data and model predictions was observed. The proposed CFD methodology was then successfully applied to model a detonation event that occurred during liquid oxygen/gaseous hydrogen rocket diffuser testing at NASA Stennis Space Center.
Grain growth prediction based on data assimilation by implementing 4DVar on multi-phase-field model
NASA Astrophysics Data System (ADS)
Ito, Shin-ichi; Nagao, Hiromichi; Kasuya, Tadashi; Inoue, Junya
2017-12-01
We propose a method to predict grain growth based on data assimilation by using a four-dimensional variational method (4DVar). When implemented on a multi-phase-field model, the proposed method allows us to calculate the predicted grain structures and uncertainties in them that depend on the quality and quantity of the observational data. We confirm through numerical tests involving synthetic data that the proposed method correctly reproduces the true phase-field assumed in advance. Furthermore, it successfully quantifies uncertainties in the predicted grain structures, where such uncertainty quantifications provide valuable information to optimize the experimental design.
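The variational idea above can be sketched in miniature: choose the initial state that minimises a cost balancing distance to a background guess against misfit to observations propagated through the forward model. Here the forward model is a trivial scalar map x[t+1] = a·x[t] rather than the paper's multi-phase-field model, and all numbers are synthetic.

```python
# Toy 4DVar-style assimilation on a scalar linear forward model

def cost(x0, a, background, obs, sigma_b=1.0, sigma_o=0.5):
    j = (x0 - background) ** 2 / sigma_b ** 2      # background term
    x = x0
    for y in obs:
        x = a * x                                  # forward model step
        j += (x - y) ** 2 / sigma_o ** 2           # observation misfit
    return j

def assimilate(a, background, obs, lr=0.01, iters=500, eps=1e-6):
    x0 = background
    for _ in range(iters):
        g = (cost(x0 + eps, a, background, obs)
             - cost(x0 - eps, a, background, obs)) / (2 * eps)
        x0 -= lr * g                               # descend the 4DVar cost
    return x0

# True initial state 2.0 with a = 0.9; observations of the next three steps
obs = [2.0 * 0.9 ** t for t in (1, 2, 3)]
x0 = assimilate(0.9, background=1.0, obs=obs)
print(round(x0, 2))  # → 1.89 (pulled from the background toward the truth)
```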
Predicting longshore gradients in longshore transport: the CERC formula compared to Delft3D
List, Jeffrey H.; Hanes, Daniel M.; Ruggiero, Peter
2007-01-01
The prediction of longshore transport gradients is critical for forecasting shoreline change. We employ simple test cases consisting of shoreface pits at varying distances from the shoreline to compare the longshore transport gradients predicted by the CERC formula against results derived from the process-based model Delft3D. Results show that while in some cases the two approaches give very similar results, in many cases the results diverge greatly. Although neither approach is validated with field data here, the Delft3D-based transport gradients provide much more consistent predictions of erosional and accretionary zones as the pit location varies across the shoreface.
Empirical Prediction of Aircraft Landing Gear Noise
NASA Technical Reports Server (NTRS)
Golub, Robert A. (Technical Monitor); Guo, Yue-Ping
2005-01-01
This report documents a semi-empirical/semi-analytical method for landing gear noise prediction. The method is based on scaling laws of the theory of aerodynamic noise generation and correlation of these scaling laws with current available test data. The former gives the method a sound theoretical foundation and the latter quantitatively determines the relations between the parameters of the landing gear assembly and the far field noise, enabling practical predictions of aircraft landing gear noise, both for parametric trends and for absolute noise levels. The prediction model is validated by wind tunnel test data for an isolated Boeing 737 landing gear and by flight data for the Boeing 777 airplane. In both cases, the predictions agree well with data, both in parametric trends and in absolute noise levels.
NASA Astrophysics Data System (ADS)
Baumgartner, Matthew P.; Evans, David A.
2018-01-01
Two of the major ongoing challenges in computational drug discovery are predicting the binding pose and affinity of a compound to a protein. The Drug Design Data Resource Grand Challenge 2 was developed to address these problems and to drive development of new methods. The challenge provided the 2D structures of compounds for which the organizers held blinded data in the form of 35 X-ray crystal structures and 102 binding affinity measurements, and challenged participants to predict the binding pose and affinity of the compounds. We tested a number of pose prediction methods as part of the challenge; we found that docking methods that incorporate protein flexibility (Induced Fit Docking) outperformed methods that treated the protein as rigid. We also found that using binding pose metadynamics, a molecular dynamics based method, to score docked poses provided the best predictions of our methods, with an average RMSD of 2.01 Å. We tested both structure-based (e.g. docking) and ligand-based methods (e.g. QSAR) in the affinity prediction portion of the competition. We found that our structure-based methods based on docking with Smina (Spearman ρ = 0.614) performed slightly better than our ligand-based methods (ρ = 0.543), and had performance equivalent to the other top methods in the competition. Despite the overall good performance of our methods in comparison to other participants in the challenge, there remains significant room for improvement, especially in cases such as these where protein flexibility plays such a large role.
NASA Astrophysics Data System (ADS)
Shibata, Hisaichi; Takaki, Ryoji
2017-11-01
A novel method to compute the current-voltage characteristics (CVCs) of direct-current positive corona discharges is formulated based on a perturbation technique. We use linearized fluid equations coupled with the linearized Poisson's equation. The Townsend relation is assumed in order to predict CVCs away from the linearization point. We choose coaxial cylinders as a test problem, and we have successfully predicted the parameters that determine CVCs for arbitrary inner and outer radii. It is also confirmed that the proposed method essentially does not induce numerical instabilities.
Forensic individual age estimation with DNA: From initial approaches to methylation tests.
Freire-Aradas, A; Phillips, C; Lareu, M V
2017-07-01
Individual age estimation is a key factor in forensic science analysis that can provide very useful information applicable to criminal, legal, and anthropological investigations. Forensic age inference was initially based on morphological inspection or radiography and only later began to adopt molecular approaches. However, a lack of accuracy or technical problems hampered the introduction of these DNA-based methodologies in casework analysis. A turning point occurred when the epigenetic signature of DNA methylation was observed to gradually change during an individual's lifespan. In the last four years, the number of publications reporting DNA methylation age-correlated changes has gradually risen and the forensic community now has a range of age methylation tests applicable to forensic casework. Most forensic age predictor models have been developed based on blood DNA samples, but additional tissues are now also being explored. This review assesses the most widely adopted genes harboring methylation sites, detection technologies, statistical age-predictive analyses, and potential causes of variation in age estimates. Despite the need for further work to improve predictive accuracy and to establish a broader range of tissues for which tests can analyze the most appropriate methylation sites, several forensic age predictors have now been reported that provide consistency in their prediction accuracies (predictive error of ±4 years); this makes them compelling tools with the potential to contribute key information to help guide criminal investigations. Copyright © 2017 Central Police University.
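The statistical core common to most of these age predictors can be sketched as a linear model over methylation beta-values (0-1) at age-correlated CpG sites. The three sites and all coefficients below are invented for illustration, not taken from any published predictor.

```python
def predict_age(betas, intercept=12.0, coefs=(55.0, 30.0, -20.0)):
    """Linear age predictor over methylation beta-values at three
    hypothetical CpG markers (coefficients are placeholders)."""
    return intercept + sum(c * b for c, b in zip(coefs, betas))

print(round(predict_age((0.6, 0.4, 0.2))))  # → 53
```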
NASA Astrophysics Data System (ADS)
Herkül, Kristjan; Peterson, Anneliis; Paekivi, Sander
2017-06-01
Both basic science and marine spatial planning are in need of high-resolution, spatially continuous data on seabed habitats and biota. As conventional point-wise sampling is unable to cover large spatial extents in high detail, it must be supplemented with remote sensing and modeling in order to fulfill the scientific and management needs. The combined use of in situ sampling, sonar scanning, and mathematical modeling is becoming the main method for mapping both abiotic and biotic seabed features. Further development and testing of the methods in varying locations and environmental settings is essential for moving towards a unified and generally accepted methodology. To fill the relevant research gap in the Baltic Sea, we used multibeam sonar and mathematical modeling methods - generalized additive models (GAM) and random forest (RF) - together with underwater video to map the seabed substrate and epibenthos of offshore shallows. In addition to testing the general applicability of the proposed complex of techniques, the predictive power of different sonar-based variables and modeling algorithms was tested. Mean depth, followed by mean backscatter, were the most influential variables in most of the models. Generally, mean values of sonar-based variables had higher predictive power than their standard deviations. The predictive accuracy of RF was higher than that of GAM. To conclude, we found the method to be feasible, with predictive accuracy similar to previous studies of sonar-based mapping.
Experimental evaluation of a recursive model identification technique for type 1 diabetes.
Finan, Daniel A; Doyle, Francis J; Palerm, Cesar C; Bevier, Wendy C; Zisser, Howard C; Jovanovic, Lois; Seborg, Dale E
2009-09-01
A model-based controller for an artificial beta cell requires an accurate model of the glucose-insulin dynamics in type 1 diabetes subjects. To ensure the robustness of the controller for changing conditions (e.g., changes in insulin sensitivity due to illnesses, changes in exercise habits, or changes in stress levels), the model should be able to adapt to the new conditions by means of a recursive parameter estimation technique. Such an adaptive strategy will ensure that the most accurate model is used for the current conditions, and thus the most accurate model predictions are used in model-based control calculations. In a retrospective analysis, empirical dynamic autoregressive exogenous input (ARX) models were identified from glucose-insulin data for nine type 1 diabetes subjects in ambulatory conditions. Data sets consisted of continuous (5-minute) glucose concentration measurements obtained from a continuous glucose monitor, basal insulin infusion rates and times and amounts of insulin boluses obtained from the subjects' insulin pumps, and subject-reported estimates of the times and carbohydrate content of meals. Two identification techniques were investigated: nonrecursive, or batch methods, and recursive methods. Batch models were identified from a set of training data, whereas recursively identified models were updated at each sampling instant. Both types of models were used to make predictions of new test data. For the purpose of comparison, model predictions were compared to zero-order hold (ZOH) predictions, which were made by simply holding the current glucose value constant for p steps into the future, where p is the prediction horizon. Thus, the ZOH predictions are model free and provide a base case for the prediction metrics used to quantify the accuracy of the model predictions. In theory, recursive identification techniques are needed only when there are changing conditions in the subject that require model adaptation. 
Thus, the identification and validation techniques were performed with both "normal" data and data collected during conditions of reduced insulin sensitivity. The latter were achieved by having the subjects self-administer a medication, prednisone, for 3 consecutive days. The recursive models were allowed to adapt to this condition of reduced insulin sensitivity, while the batch models were only identified from normal data. Data from nine type 1 diabetes subjects in ambulatory conditions were analyzed; six of these subjects also participated in the prednisone portion of the study. For normal test data, the batch ARX models produced 30-, 45-, and 60-minute-ahead predictions that had average root mean square error (RMSE) values of 26, 34, and 40 mg/dl, respectively. For test data characterized by reduced insulin sensitivity, the batch ARX models produced 30-, 60-, and 90-minute-ahead predictions with average RMSE values of 27, 46, and 59 mg/dl, respectively; the recursive ARX models demonstrated similar performance with corresponding values of 27, 45, and 61 mg/dl, respectively. The identified ARX models (batch and recursive) produced more accurate predictions than the model-free ZOH predictions, but only marginally. For test data characterized by reduced insulin sensitivity, RMSE values for the predictions of the batch ARX models were 9, 5, and 5% more accurate than the ZOH predictions for prediction horizons of 30, 60, and 90 minutes, respectively. In terms of RMSE values, the 30-, 60-, and 90-minute predictions of the recursive models were more accurate than the ZOH predictions, by 10, 5, and 2%, respectively. In this experimental study, the recursively identified ARX models resulted in predictions of test data that were similar, but not superior, to the batch models. Even for the test data characteristic of reduced insulin sensitivity, the batch and recursive models demonstrated similar prediction accuracy. 
The predictions of the identified ARX models were only marginally more accurate than the model-free ZOH predictions. Given the simplicity of the ARX models and the computational ease with which they are identified, however, even modest improvements may justify the use of these models in a model-based controller for an artificial beta cell. 2009 Diabetes Technology Society.
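The zero-order-hold baseline and the RMSE scoring used above can be illustrated on a synthetic glucose trace; the ARX-style extrapolation below uses assumed coefficients (no model identification is performed), so the numbers show only how the two predictors and the metric fit together.

```python
import math

def rmse(actual, predicted):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(actual, predicted)) / len(actual))

glucose = [100 + 2 * k for k in range(30)]   # 5-min samples, steadily rising
p = 6                                        # 6 samples = 30-minute horizon
actual = glucose[p:]
# ZOH: hold the current value constant for p steps (model-free base case)
zoh = [glucose[t] for t in range(len(glucose) - p)]
# ARX-style: extrapolate the last trend p steps ahead (assumed coefficients)
arx = [glucose[t] + p * (glucose[t] - glucose[t - 1])
       for t in range(1, len(glucose) - p)]
print(rmse(actual, zoh), rmse(actual[1:], arx))  # → 12.0 0.0
```

On this deliberately trend-dominated trace, ZOH lags by the full 30-minute rise while the trend-aware predictor is exact; on real, noisy glucose data the gap narrows, consistent with the modest improvements reported above.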
Mark E. Harmon; Robert J. Pabst
2015-01-01
Question: Many predictions about forest succession have been based on chronosequences. Are these predictions, at the population, community and ecosystem level, consistent with long-term measurements in permanent plots? Location: Pseudotsuga menziesii (Mirb.) Franco dominated forest in western Oregon, US. Methods: Over a 100-yr period,...
Hatanaka, N; Yamamoto, Y; Ichihara, K; Mastuo, S; Nakamura, Y; Watanabe, M; Iwatani, Y
2008-04-01
Various scales have been devised to predict the development of pressure ulcers on the basis of clinical and laboratory data, such as the Braden Scale (Braden score), which is used to monitor the activity and skin conditions of bedridden patients. However, none of these scales facilitates clinically reliable prediction. To develop a clinical laboratory data-based predictive equation for the development of pressure ulcers. Subjects were 149 hospitalised patients with respiratory disorders who were monitored for the development of pressure ulcers over a 3-month period. The proportional hazards model (Cox regression) was used to analyse the results of 12 basic laboratory tests on the day of hospitalisation in comparison with the Braden score. Pressure ulcers developed in 38 patients within the study period. A Cox regression model consisting solely of Braden scale items showed that none of these items contributed significantly to predicting pressure ulcers. Rather, a combination of haemoglobin (Hb), C-reactive protein (CRP), albumin (Alb), age, and gender produced the best model for prediction. Using this set of explanatory variables, we created a new indicator based on a multiple logistic regression equation. The new indicator showed high sensitivity (0.73) and specificity (0.70), and its diagnostic power was higher than that of Alb, Hb, CRP, or the Braden score alone. The new indicator may become a more useful clinical tool for predicting pressure ulcers than the Braden score. The new indicator warrants verification studies to facilitate its clinical implementation in the future.
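An indicator of this kind can be sketched as a logistic score over the five predictors named above, evaluated by sensitivity and specificity at a cutoff. The coefficients and patient records below are invented for demonstration, not the published equation.

```python
import math

def ulcer_risk(hb, crp, alb, age, male, b=(-2.0, -0.3, 0.08, -0.9, 0.04, 0.2)):
    """Illustrative logistic indicator over Hb, CRP, Alb, age, gender;
    coefficients are placeholders."""
    z = b[0] + b[1] * hb + b[2] * crp + b[3] * alb + b[4] * age + b[5] * male
    return 1.0 / (1.0 + math.exp(-z))          # predicted ulcer probability

def sensitivity_specificity(labels, scores, cutoff=0.5):
    tp = sum(1 for y, s in zip(labels, scores) if y and s >= cutoff)
    tn = sum(1 for y, s in zip(labels, scores) if not y and s < cutoff)
    return tp / sum(labels), tn / (len(labels) - sum(labels))

# (Hb, CRP, Alb, age, male, developed_ulcer) -- invented patients
patients = [(9.0, 60.0, 2.5, 80, 1, 1), (10.0, 40.0, 2.8, 75, 0, 1),
            (14.0, 5.0, 4.2, 60, 1, 0), (13.5, 3.0, 4.0, 55, 0, 0)]
labels = [pt[5] for pt in patients]
scores = [ulcer_risk(*pt[:5]) for pt in patients]
sens, spec = sensitivity_specificity(labels, scores)
print(sens, spec)  # → 0.5 1.0
```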
Biewener, Andrew A.; Wakeling, James M.
2017-01-01
Hill-type models are ubiquitous in the field of biomechanics, providing estimates of a muscle's force as a function of its activation state and its assumed force–length and force–velocity properties. However, despite their routine use, the accuracy with which Hill-type models predict the forces generated by muscles during submaximal, dynamic tasks remains largely unknown. This study compared human gastrocnemius forces predicted by Hill-type models with the forces estimated from ultrasound-based measures of tendon length changes and stiffness during cycling, over a range of loads and cadences. We tested both a traditional model, with one contractile element, and a differential model, with two contractile elements that accounted for independent contributions of slow and fast muscle fibres. Both models were driven by subject-specific, ultrasound-based measures of fascicle lengths, velocities and pennation angles and by activation patterns of slow and fast muscle fibres derived from surface electromyographic recordings. The models predicted, on average, 54% of the time-varying gastrocnemius forces estimated from the ultrasound-based methods. However, differences between predicted and estimated forces were smaller under low speed–high activation conditions, with models able to predict nearly 80% of the gastrocnemius force over a complete pedal cycle. Additionally, the predictions from the Hill-type muscle models tested here showed that a similar pattern of force production could be achieved for most conditions with and without accounting for the independent contributions of different muscle fibre types. PMID:28202584
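The contractile element at the core of such models multiplies activation by normalised force-length and force-velocity factors. The sketch below uses generic textbook-style curve constants (Gaussian force-length, Hill hyperbola for shortening), not the subject-specific gastrocnemius parameters used in the study.

```python
import math

def hill_force(activation, l_norm, v_norm, f_max=1000.0):
    """Bare-bones Hill-type contractile element:
    force = activation * F_max * f_L(length) * f_V(velocity).
    l_norm = 1 at optimal fibre length; v_norm > 0 means shortening.
    Curve constants are generic placeholders."""
    f_l = math.exp(-((l_norm - 1.0) / 0.45) ** 2)           # force-length
    if v_norm >= 0:                                          # shortening
        f_v = (1.0 - v_norm) / (1.0 + v_norm / 0.25)         # Hill hyperbola
    else:                                                    # lengthening
        f_v = 1.5 - 0.5 * (1.0 + v_norm) / (1.0 - 7.56 * v_norm / 0.25)
    return activation * f_max * f_l * max(f_v, 0.0)

# Isometric contraction at optimal length and full activation gives F_max
print(hill_force(1.0, 1.0, 0.0))  # → 1000.0
```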
Mandelker, Diana; Zhang, Liying; Kemel, Yelena; Stadler, Zsofia K; Joseph, Vijai; Zehir, Ahmet; Pradhan, Nisha; Arnold, Angela; Walsh, Michael F; Li, Yirong; Balakrishnan, Anoop R; Syed, Aijazuddin; Prasad, Meera; Nafa, Khedoudja; Carlo, Maria I; Cadoo, Karen A; Sheehan, Meg; Fleischut, Megan H; Salo-Mullen, Erin; Trottier, Magan; Lipkin, Steven M; Lincoln, Anne; Mukherjee, Semanti; Ravichandran, Vignesh; Cambria, Roy; Galle, Jesse; Abida, Wassim; Arcila, Marcia E; Benayed, Ryma; Shah, Ronak; Yu, Kenneth; Bajorin, Dean F; Coleman, Jonathan A; Leach, Steven D; Lowery, Maeve A; Garcia-Aguilar, Julio; Kantoff, Philip W; Sawyers, Charles L; Dickler, Maura N; Saltz, Leonard; Motzer, Robert J; O'Reilly, Eileen M; Scher, Howard I; Baselga, Jose; Klimstra, David S; Solit, David B; Hyman, David M; Berger, Michael F; Ladanyi, Marc; Robson, Mark E; Offit, Kenneth
2017-09-05
Guidelines for cancer genetic testing based on family history may miss clinically actionable genetic changes with established implications for cancer screening or prevention. To determine the proportion and potential clinical implications of inherited variants detected using simultaneous sequencing of the tumor and normal tissue ("tumor-normal sequencing") compared with genetic test results based on current guidelines. From January 2014 until May 2016 at Memorial Sloan Kettering Cancer Center, 10 336 patients consented to tumor DNA sequencing. Since May 2015, 1040 of these patients with advanced cancer were referred by their oncologists for germline analysis of 76 cancer predisposition genes. Patients with clinically actionable inherited mutations whose genetic test results would not have been predicted by published decision rules were identified. Follow-up for potential clinical implications of mutation detection was through May 2017. Tumor and germline sequencing compared with the predicted yield of targeted germline sequencing based on clinical guidelines. Proportion of clinically actionable germline mutations detected by universal tumor-normal sequencing that would not have been detected by guideline-directed testing. Of 1040 patients, the median age was 58 years (interquartile range, 50.5-66 years), 65.3% were male, and 81.3% had stage IV disease at the time of genomic analysis, with prostate, renal, pancreatic, breast, and colon cancer as the most common diagnoses. Of the 1040 patients, 182 (17.5%; 95% CI, 15.3%-19.9%) had clinically actionable mutations conferring cancer susceptibility, including 149 with moderate- to high-penetrance mutations; 101 patients tested (9.7%; 95% CI, 8.1%-11.7%) would not have had these mutations detected using clinical guidelines, including 65 with moderate- to high-penetrance mutations. Frequency of inherited mutations was related to case mix, stage, and founder mutations. 
Germline findings led to discussion or initiation of change to targeted therapy in 38 patients tested (3.7%) and predictive testing in the families of 13 individuals (1.3%), including 6 for whom genetic evaluation would not have been initiated by guideline-based testing. In this referral population with selected advanced cancers, universal sequencing of a broad panel of cancer-related genes in paired germline and tumor DNA samples was associated with increased detection of individuals with potentially clinically significant heritable mutations over the predicted yield of targeted germline testing based on current clinical guidelines. Knowledge of these additional mutations can help guide therapeutic and preventive interventions, but whether all of these interventions would improve outcomes for patients with cancer or their family members requires further study. clinicaltrials.gov Identifier: NCT01775072.
A new condition for assessing the clinical efficiency of a diagnostic test.
Bokhari, Ehsan; Hubert, Lawrence
2015-09-01
When prediction using a diagnostic test outperforms simple prediction using base rates, the test is said to be "clinically efficient," a term first introduced into the literature by Meehl and Rosen (1955) in Psychological Bulletin. This article provides three equivalent conditions for determining the clinical efficiency of a diagnostic test: (a) Meehl-Rosen (Meehl & Rosen, 1955); (b) Dawes (Dawes, 1962); and (c) the Bokhari-Hubert condition, introduced here for the first time. Clinical efficiency is then generalized to situations where misclassification costs are considered unequal (for example, false negatives are more costly than false positives). As an illustration, the clinical efficiency of an actuarial device for predicting violent and dangerous behavior is examined that was developed as part of the MacArthur Violence Risk Assessment Study. (c) 2015 APA, all rights reserved.
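As a numerical illustration (a sketch, not taken from the article), the clinical-efficiency question can be phrased as whether the test's expected accuracy exceeds simple prediction from the base rate; the function name and numbers below are hypothetical:

```python
def clinically_efficient(sensitivity, specificity, base_rate):
    """Meehl-Rosen-style check: does the test beat base-rate prediction?

    Predicting from the base rate alone is correct with probability
    max(base_rate, 1 - base_rate); the test's expected accuracy is
    sensitivity * base_rate + specificity * (1 - base_rate).
    """
    test_accuracy = sensitivity * base_rate + specificity * (1 - base_rate)
    baseline_accuracy = max(base_rate, 1 - base_rate)
    return test_accuracy > baseline_accuracy

# A fairly accurate test can still lose to the base rate when the
# condition is rare:
print(clinically_efficient(0.90, 0.90, 0.50))  # True
print(clinically_efficient(0.90, 0.90, 0.02))  # False
```

This makes concrete why base rates matter: at a 2% base rate, always predicting "negative" is correct 98% of the time, which a 90%-accurate test cannot match.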
International Space Station Bacteria Filter Element Post-Flight Testing and Service Life Prediction
NASA Technical Reports Server (NTRS)
Perry, J. L.; von Jouanne, R. G.; Turner, E. H.
2003-01-01
The International Space Station uses high efficiency particulate air (HEPA) filters to remove particulate matter from the cabin atmosphere. Known as Bacteria Filter Elements (BFEs), there are 13 elements deployed on board the ISS's U.S. Segment. The pre-flight service life prediction of 1 year for the BFEs is based upon performance engineering analysis of data collected during developmental testing that used a synthetic dust challenge. While this challenge is considered reasonable and conservative from a design perspective, an understanding of the actual filter loading is required to best manage the critical ISS Program resources. Thus testing was conducted on BFEs returned from the ISS to refine the service life prediction. Results from this testing and implications to ISS resource management are discussed. Recommendations for realizing significant savings to the ISS Program are presented.
Scoring in genetically modified organism proficiency tests based on log-transformed results.
Thompson, Michael; Ellison, Stephen L R; Owen, Linda; Mathieson, Kenneth; Powell, Joanne; Key, Pauline; Wood, Roger; Damant, Andrew P
2006-01-01
The study considers data from 2 UK-based proficiency schemes and includes data from a total of 29 rounds and 43 test materials over a period of 3 years. The results from the 2 schemes are similar and reinforce each other. The amplification process used in quantitative polymerase chain reaction determinations predicts a mixture of normal, binomial, and lognormal distributions dominated by the latter 2. As predicted, the study results consistently follow a positively skewed distribution. Log-transformation prior to calculating z-scores is effective in establishing near-symmetric distributions that are sufficiently close to normal to justify interpretation on the basis of the normal distribution.
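A hedged sketch of the scoring approach described: z-scores are computed after log-transformation so that the skewed raw results become near-symmetric. The function name and the target standard deviation below are illustrative, not taken from the schemes:

```python
import math

def log_z_score(result, assigned_value, sigma_log):
    """z-score computed on the natural-log scale (illustrative form).

    A positively skewed (roughly lognormal) distribution of raw results
    becomes near-symmetric after log-transformation, so the usual
    z-score interpretation based on the normal distribution applies.
    """
    return (math.log(result) - math.log(assigned_value)) / sigma_log

# The log scale treats a doubling and a halving of the assigned value
# symmetrically, unlike z-scores on the raw scale:
print(round(log_z_score(2.0, 1.0, 0.25), 3))  # 2.773
print(round(log_z_score(0.5, 1.0, 0.25), 3))  # -2.773
```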
The Growth of Multi-Site Fatigue Damage in Fuselage Lap Joints
NASA Technical Reports Server (NTRS)
Piascik, Robert S.; Willard, Scott A.
1999-01-01
Destructive examinations were performed to document the progression of multi-site damage (MSD) in three lap joint panels that were removed from a full scale fuselage test article that was tested to 60,000 full pressurization cycles. Similar fatigue crack growth characteristics were observed for small cracks (50 microns to 10 mm) emanating from counter bore rivets, straight shank rivets, and 100 deg counter sink rivets. Good correlation of the fatigue crack growth data base obtained in this study and FASTRAN Code predictions show that the growth of MSD in the fuselage lap joint structure can be predicted by fracture mechanics based methods.
Katoh, Masakazu; Hamajima, Fumiyasu; Ogasawara, Takahiro; Hata, Ken-Ichiro
2010-06-01
A new OECD test guideline 431 (TG431) for in vitro skin corrosion tests using reconstructed human skin models was adopted by the OECD in 2004. TG431 defines the criteria for the general function and performance of applicable skin models. To confirm that the new reconstructed human epidermal model, LabCyte EPI-MODEL, is applicable to the skin corrosion test according to TG431, the predictability and repeatability of the model were evaluated. The test was performed according to the test protocol described in TG431. Because LabCyte EPI-MODEL is an epidermal model like EpiDerm, we adopted the EpiDerm prediction model of skin corrosion for the LabCyte EPI-MODEL, using twenty test chemicals (10 corrosive and 10 non-corrosive) in the first stage. The prediction model distinguished corrosive from non-corrosive chemicals perfectly, so it was judged that the EpiDerm prediction model could be applied to the LabCyte EPI-MODEL. In the second stage, the repeatability of this test protocol with the LabCyte EPI-MODEL was examined using twelve chemicals (6 corrosive and 6 non-corrosive) described in TG431; these results demonstrated high repeatability and accurate predictability. It was concluded that LabCyte EPI-MODEL is applicable to the skin corrosion test protocol according to TG431.
Cunha-Cruz, Joana; Milgrom, Peter; Shirtcliff, R Michael; Bailit, Howard L; Huebner, Colleen E; Conrad, Douglas; Ludwig, Sharity; Mitchell, Melissa; Dysert, Jeanne; Allen, Gary; Scott, JoAnna; Mancl, Lloyd
2015-06-20
To improve the oral health of low-income children, innovations in dental delivery systems are needed, including community-based care, the use of expanded duty auxiliary dental personnel, capitation payments, and global budgets. This paper describes the protocol for PREDICT (Population-centered Risk- and Evidence-based Dental Interprofessional Care Team), an evaluation project to test the effectiveness of new delivery and payment systems for improving dental care and oral health. This is a parallel-group cluster randomized controlled trial. Fourteen rural Oregon counties with a publicly insured (Medicaid) population of 82,000 children (0 to 21 years old) and pregnant women served by a managed dental care organization are randomized into test and control counties. In the test intervention (PREDICT), allied dental personnel provide screening and preventive services in community settings and case managers serve as patient navigators to arrange referrals of children who need dentist services. The delivery system intervention is paired with a compensation system for high performance (pay-for-performance) with efficient performance monitoring. PREDICT focuses on the following: 1) identifying eligible children and gaining caregiver consent for services in community settings (for example, schools); 2) providing risk-based preventive and caries stabilization services efficiently at these settings; 3) providing curative care in dental clinics; and 4) incentivizing local delivery teams to meet performance benchmarks. In the control intervention, care is delivered in dental offices without performance incentives. The primary outcome is the prevalence of untreated dental caries. Other outcomes are related to process, structure and cost. Data are collected through patient and staff surveys, clinical examinations, and the review of health and administrative records. If effective, PREDICT is expected to substantially reduce disparities in dental care and oral health. 
PREDICT can be disseminated to other care organizations as publicly insured clients are increasingly served by large practice organizations. ClinicalTrials.gov NCT02312921 6 December 2014. The Robert Wood Johnson Foundation and Advantage Dental Services, LLC, are supporting the evaluation.
Redundancy and Reduction: Speakers Manage Syntactic Information Density
ERIC Educational Resources Information Center
Jaeger, T. Florian
2010-01-01
A principle of efficient language production based on information theoretic considerations is proposed: Uniform Information Density predicts that language production is affected by a preference to distribute information uniformly across the linguistic signal. This prediction is tested against data from syntactic reduction. A single multilevel…
Si, Guo-Ning; Chen, Lan; Li, Bao-Guo
2014-04-01
Based on the Kawakita powder compression equation, a general theoretical model for predicting the compression characteristics of multi-component pharmaceutical powders with different mass ratios was developed. Uniaxial flat-face compression tests of lactose, starch, and microcrystalline cellulose powders were carried out separately to obtain the Kawakita equation parameters of each material. Uniaxial flat-face compression tests were then conducted on powder mixtures of lactose, starch, microcrystalline cellulose, and sodium stearyl fumarate at five mass ratios, from which the correlation between mixture density and loading pressure and the Kawakita equation curves were obtained. Finally, the theoretical predictions were compared with the experimental results. The analysis showed that the errors in predicting mixture densities were less than 5.0% and the errors in the Kawakita vertical coordinate were within 4.6%, indicating that the theoretical model can be used to predict the direct compaction characteristics of multi-component pharmaceutical powders.
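A minimal sketch of how single-component Kawakita parameters might be recovered from compression data, using the standard linearised form of the equation; the synthetic data and function name are illustrative, not from the paper:

```python
import numpy as np

def fit_kawakita(pressures, volume_reductions):
    """Fit Kawakita parameters a, b from uniaxial compression data.

    Kawakita equation: C = a*b*P / (1 + b*P), where C = (V0 - V)/V0
    is the relative volume reduction at pressure P.  Its linearised
    form, P/C = P/a + 1/(a*b), lets a and b be recovered from a
    straight-line fit of P/C against P.
    """
    P = np.asarray(pressures, dtype=float)
    C = np.asarray(volume_reductions, dtype=float)
    slope, intercept = np.polyfit(P, P / C, 1)
    a = 1.0 / slope
    b = 1.0 / (a * intercept)
    return a, b

# Round-trip check on synthetic data generated with a = 0.6, b = 0.05:
P = np.array([10.0, 25.0, 50.0, 100.0, 200.0])
C = 0.6 * 0.05 * P / (1 + 0.05 * P)
a, b = fit_kawakita(P, C)
print(round(a, 3), round(b, 3))  # 0.6 0.05
```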
Testing the adaptive radiation hypothesis for the lemurs of Madagascar.
Herrera, James P
2017-01-01
Lemurs, the diverse, endemic primates of Madagascar, are thought to represent a classic example of adaptive radiation. Based on the most complete phylogeny of living and extinct lemurs yet assembled, I tested predictions of adaptive radiation theory by estimating rates of speciation, extinction and adaptive phenotypic evolution. As predicted, lemur speciation rate exceeded that of their sister clade by nearly twofold, indicating the diversification dynamics of lemurs and mainland relatives may have been decoupled. Lemur diversification rates did not decline over time, however, as predicted by adaptive radiation theory. Optimal body masses diverged among dietary and activity pattern niches as lineages diversified into unique multidimensional ecospace. Based on these results, lemurs only partially fulfil the predictions of adaptive radiation theory, with phenotypic evolution corresponding to an 'early burst' of adaptive differentiation. The results must be interpreted with caution, however, because over the long evolutionary history of lemurs (approx. 50 million years), the 'early burst' signal of adaptive radiation may have been eroded by extinction.
Pseudo CT estimation from MRI using patch-based random forest
NASA Astrophysics Data System (ADS)
Yang, Xiaofeng; Lei, Yang; Shu, Hui-Kuo; Rossi, Peter; Mao, Hui; Shim, Hyunsuk; Curran, Walter J.; Liu, Tian
2017-02-01
Recently, MR simulators have gained popularity in radiation therapy planning because they avoid the radiation exposure associated with CT simulators. We propose a method for pseudo CT estimation from MR images based on a patch-based random forest. Patient-specific anatomical features are extracted from the aligned training images and adopted as signatures for each voxel. The most robust and informative features are identified using feature selection to train the random forest. The well-trained random forest is used to predict the pseudo CT of a new patient. This prediction technique was tested with human brain images and the prediction accuracy was assessed using the original CT images. Peak signal-to-noise ratio (PSNR) and feature similarity (FSIM) indexes were used to quantify the differences between the pseudo and original CT images. The experimental results showed the proposed method could accurately generate pseudo CT images from MR images. In summary, we have developed a new pseudo CT prediction method based on patch-based random forest, demonstrated its clinical feasibility, and validated its prediction accuracy. This pseudo CT prediction technique could be a useful tool for MRI-based radiation treatment planning and attenuation correction in a PET/MRI scanner.
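A toy sketch of the patch-to-CT regression step, assuming scikit-learn's RandomForestRegressor as a stand-in for the authors' forest and purely synthetic patch data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical stand-in data: each row is a flattened MR patch
# (e.g. 5x5x5 voxels) centred on a voxel; the regression target is
# that voxel's CT number from the aligned training CT.
n_voxels, patch_size = 2000, 125
mr_patches = rng.normal(size=(n_voxels, patch_size))
ct_values = (mr_patches[:, patch_size // 2] * 400.0
             + rng.normal(scale=20.0, size=n_voxels))

# Train on aligned MR/CT pairs, then predict pseudo CT values for
# held-out voxels standing in for a new patient.
forest = RandomForestRegressor(n_estimators=50, random_state=0)
forest.fit(mr_patches[:1500], ct_values[:1500])
pseudo_ct = forest.predict(mr_patches[1500:])
print(pseudo_ct.shape)  # (500,)
```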
Evaluating the accuracy of SHAPE-directed RNA secondary structure predictions
Sükösd, Zsuzsanna; Swenson, M. Shel; Kjems, Jørgen; Heitsch, Christine E.
2013-01-01
Recent advances in RNA structure determination include using data from high-throughput probing experiments to improve thermodynamic prediction accuracy. We evaluate the extent and nature of improvements in data-directed predictions for a diverse set of 16S/18S ribosomal sequences using a stochastic model of experimental SHAPE data. The average accuracy for 1000 data-directed predictions always improves over the original minimum free energy (MFE) structure. However, the amount of improvement varies with the sequence, exhibiting a correlation with MFE accuracy. Further analysis of this correlation shows that accurate MFE base pairs are typically preserved in a data-directed prediction, whereas inaccurate ones are not. Thus, the positive predictive value of common base pairs is consistently higher than the directed prediction accuracy. Finally, we confirm sequence dependencies in the directability of thermodynamic predictions and investigate the potential for greater accuracy improvements in the worst performing test sequence. PMID:23325843
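The accuracy measures discussed above (sensitivity of a prediction and positive predictive value of its base pairs) can be sketched as set operations over (i, j) pair lists; the pairs below are illustrative only:

```python
def pair_accuracy(predicted_pairs, reference_pairs):
    """Sensitivity and positive predictive value for predicted base pairs.

    Each pair is an (i, j) tuple of paired positions.  Sensitivity is
    the fraction of reference pairs recovered; PPV is the fraction of
    predicted pairs that are correct.
    """
    predicted, reference = set(predicted_pairs), set(reference_pairs)
    true_pairs = predicted & reference
    sensitivity = len(true_pairs) / len(reference)
    ppv = len(true_pairs) / len(predicted)
    return sensitivity, ppv

ref = [(1, 20), (2, 19), (3, 18), (5, 15)]
pred = [(1, 20), (2, 19), (4, 16)]
s, ppv = pair_accuracy(pred, ref)
print(s, round(ppv, 3))  # 0.5 0.667
```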
Kim, Esther S H; Ishwaran, Hemant; Blackstone, Eugene; Lauer, Michael S
2007-11-06
The purpose of this study was to externally validate the prognostic value of age- and gender-based nomograms and categorical definitions of impaired exercise capacity (EC). Exercise capacity predicts death, but its use in routine clinical practice is hampered by its close correlation with age and gender. For a median of 5 years, we followed 22,275 patients without known heart disease who underwent symptom-limited stress testing. Models for predicted or impaired EC were identified by literature search. Gender-specific multivariable proportional hazards models were constructed. Four methods were used to assess validity: Akaike Information Criterion (AIC), right-censored c-index in 100 out-of-bootstrap samples, the Nagelkerke Index R2, and calculation of calibration error in 100 bootstrap samples. There were 646 and 430 deaths in 13,098 men and 9,177 women, respectively. Of the 7 models tested in men, a model based on a Veterans Affairs cohort (predicted metabolic equivalents [METs] = 18 - [0.15 x age]) had the highest AIC and R2. In women, a model based on the St. James Take Heart Project (predicted METs = 14.7 - [0.13 x age]) performed best. Categorical definitions of fitness performed less well. Even after accounting for age and gender, there was still an important interaction with age, whereby predicted EC was a weaker predictor in older subjects (p for interaction <0.001 in men and 0.003 in women). Several methods describe EC accounting for age and gender-related differences, but their ability to predict mortality differs. Simple cutoff values fail to fully describe EC's strong predictive value.
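The two best-performing equations quoted above can be applied directly; the percent-of-predicted helper is a common normalisation added here for illustration and is not part of the study:

```python
def predicted_mets(age, sex):
    """Age- and sex-predicted exercise capacity in METs.

    Men:   Veterans Affairs equation, 18 - 0.15 * age
    Women: St. James Take Heart equation, 14.7 - 0.13 * age
    These were the best-performing models reported in the study.
    """
    if sex == "male":
        return 18.0 - 0.15 * age
    return 14.7 - 0.13 * age

def percent_predicted(achieved_mets, age, sex):
    """Achieved EC as a fraction of predicted (illustrative helper)."""
    return achieved_mets / predicted_mets(age, sex)

print(predicted_mets(60, "male"))  # 9.0
print(round(percent_predicted(7.0, 50, "female"), 2))  # 0.85
```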
ERIC Educational Resources Information Center
Yeo, Seungsoo
2010-01-01
The purpose of this synthesis was to examine the relationship between Curriculum-Based Measurement (CBM) and statewide achievement tests in reading. A multilevel meta-analysis was used to calculate the correlation coefficient of the population for 27 studies that met the inclusion criteria. Results showed an overall large correlation coefficient…
77 FR 66149 - Significant New Use Rules on Certain Chemical Substances
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-02
... ecological structural activity relationship (EcoSAR) analysis of test data on analogous esters, EPA predicts... milligram/cubic meter (mg/m\\3\\) as an 8-hour time-weighted average. In addition, based on EcoSAR analysis of... the PMN substance via the inhalation route. In addition, based on EcoSAR analysis of test data on...
Kufa, Tendesayi; Kharsany, Ayesha BM; Cawood, Cherie; Khanyile, David; Lewis, Lara; Grobler, Anneke; Chipeta, Zawadi; Bere, Alfred; Glenshaw, Mary; Puren, Adrian
2017-01-01
Introduction: We describe the overall accuracy and performance of a serial rapid HIV testing algorithm used in community-based HIV testing in the context of a population-based household survey conducted in two sub-districts of uMgungundlovu district, KwaZulu-Natal, South Africa, against reference fourth-generation HIV-1/2 antibody and p24 antigen combination immunoassays. We discuss implications of the findings on rapid HIV testing programmes. Methods: Cross-sectional design: Following enrolment into the survey, questionnaires were administered to eligible and consenting participants in order to obtain demographic and HIV-related data. Peripheral blood samples were collected for HIV-related testing. Participants were offered community-based HIV testing in the home by trained field workers using a serial algorithm with two rapid diagnostic tests (RDTs). In the laboratory, reference HIV testing was conducted using two fourth-generation immunoassays with all positives in the confirmatory test considered true positives. Accuracy, sensitivity, specificity, positive predictive value, negative predictive value and false-positive and false-negative rates were determined. Results: Of 10,236 individuals enrolled in the survey, 3740 were tested in the home (median age 24 years (interquartile range 19–31 years), 42.1% males and HIV positivity on RDT algorithm 8.0%). From those tested, 3729 (99.7%) had a definitive RDT result as well as a laboratory immunoassay result. The overall accuracy of the RDT when compared to the fourth-generation immunoassays was 98.8% (95% confidence interval (CI) 98.5–99.2). The sensitivity, specificity, positive predictive value and negative predictive value were 91.1% (95% CI 87.5–93.7), 99.9% (95% CI 99.8–100), 99.3% (95% CI 97.4–99.8) and 99.1% (95% CI 98.8–99.4) respectively. The false-positive and false-negative rates were 0.06% (95% CI 0.01–0.24) and 8.9% (95% CI 6.3–12.53). 
Compared to true positives, false negatives were more likely to be recently infected on limited antigen avidity assay and to report antiretroviral therapy (ART) use. Conclusions: The overall accuracy of the RDT algorithm was high. However, there were few false positives, and the sensitivity was lower than expected with high false negatives, despite implementation of quality assurance measures. False negatives were associated with recent (early) infection and ART exposure. The RDT algorithm was able to correctly identify the majority of HIV infections in community-based HIV testing. Messaging on the potential for false positives and false negatives should be included in these programmes. PMID:28872274
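The reported metrics follow from a standard 2x2 table against the reference immunoassays; the counts below are illustrative only, since the abstract reports rates rather than the raw table:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard accuracy metrics from 2x2 counts against a reference test."""
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "sensitivity": tp / (tp + fn),   # 1 - false-negative rate
        "specificity": tn / (tn + fp),   # 1 - false-positive rate
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Illustrative counts chosen to resemble the reported rates:
m = diagnostic_metrics(tp=287, fp=2, fn=28, tn=3412)
print(round(m["sensitivity"], 3), round(m["specificity"], 3))  # 0.911 0.999
```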
Cater, Kathleen C; Harbell, John W
2008-01-01
The bovine corneal opacity and permeability (BCOP) assay can be used to predict relative eye irritation potential of surfactant-based personal care formulations relative to a corporate benchmark. The human eye sting test is typically used to evaluate product claims of no tears/no stinging for children's bath products. A preliminary investigation was conducted to test a hypothesis that the BCOP assay could be used as a prediction model for relative ranking of human eye irritation responses under conditions of a standard human eye sting test to surfactant-based formulations. BCOP assays and human eye sting tests were conducted on 4 commercial and 1 prototype body wash (BW) developed specifically for children or as mild bath products. In the human eye sting test, 10 μL of a 10% dosing solution is instilled into one eye of each panelist (n = 20), and the contralateral eye is dosed with sterile water as a control. Bulbar conjunctival erythema responses of each eye are graded at 30 seconds by an ophthalmologist. The BCOP assay permeability values (optical density at 490 nm [OD(490)]) for the 5 BWs ranged from 0.438 to 1.252 (i.e., least to most irritating). By comparison, the number of panelists exhibiting erythema responses (mild to moderately pink) ranged from 3 of 20 panelists for the least irritating BW to 10 of 20 panelists for the most irritating BW tested. The relative ranking of eye irritation potential of the 5 BWs in the BCOP assay compares favorably with the relative ranking of the BWs in the human eye sting test. Based on these findings, the permeability endpoint of the BCOP assay, as described for surfactant-based formulations, showed promise as a prediction model for relative ranking of conjunctival erythema responses in the human eye. Consequently, screening of prototype formulations in the BCOP assay would allow for formula optimization of mild bath products prior to investment in a human eye sting test.
Mathematical learning models that depend on prior knowledge and instructional strategies
NASA Astrophysics Data System (ADS)
Pritchard, David E.; Lee, Young-Jin; Bao, Lei
2008-06-01
We present mathematical learning models—predictions of student’s knowledge vs amount of instruction—that are based on assumptions motivated by various theories of learning: tabula rasa, constructivist, and tutoring. These models predict the improvement (on the post-test) as a function of the pretest score due to intervening instruction and also depend on the type of instruction. We introduce a connectedness model whose connectedness parameter measures the degree to which the rate of learning is proportional to prior knowledge. Over a wide range of pretest scores on standard tests of introductory physics concepts, it fits high-quality data nearly within error. We suggest that data from MIT have low connectedness (indicating memory-based learning) because the test used the same context and representation as the instruction and that more connected data from the University of Minnesota resulted from instruction in a different representation from the test.
NASA Technical Reports Server (NTRS)
Lyle, Karen H.
2008-01-01
The Space Shuttle Columbia Accident Investigation Board recommended that NASA develop, validate, and maintain a modeling tool capable of predicting the damage threshold for debris impacts on the Space Shuttle Reinforced Carbon-Carbon (RCC) wing leading edge and nosecap assembly. The results presented in this paper are one part of a multi-level approach that supported the development of the predictive tool used to recertify the shuttle for flight following the Columbia Accident. The assessment of predictive capability was largely based on test analysis comparisons for simpler component structures. This paper provides comparisons of finite element simulations with test data for external tank foam debris impacts onto 6-in. square RCC flat panels. Both quantitative displacement and qualitative damage assessment correlations are provided. The comparisons show good agreement and provided the Space Shuttle Program with confidence in the predictive tool.
Bitter or not? BitterPredict, a tool for predicting taste from chemical structure.
Dagan-Wiener, Ayana; Nissim, Ido; Ben Abu, Natalie; Borgonovo, Gigliola; Bassoli, Angela; Niv, Masha Y
2017-09-21
Bitter taste is an innately aversive taste modality that is considered to protect animals from consuming toxic compounds. Yet, bitterness is not always noxious and some bitter compounds have beneficial effects on health. Hundreds of bitter compounds have been reported (and are accessible via the BitterDB http://bitterdb.agri.huji.ac.il/dbbitter.php ), but numerous additional bitter molecules are still unknown. The dramatic chemical diversity of bitterants makes bitterness prediction a difficult task. Here we present a machine learning classifier, BitterPredict, which predicts whether a compound is bitter or not, based on its chemical structure. BitterDB was used as the positive set, and non-bitter molecules were gathered from the literature to create the negative set. Adaptive Boosting (AdaBoost), a decision-tree-based machine-learning algorithm, was applied to molecules represented using physicochemical and ADME/Tox descriptors. BitterPredict correctly classifies over 80% of the compounds in the hold-out test set, and 70-90% of the compounds in three independent external sets and in sensory test validation, providing a quick and reliable tool for classifying large sets of compounds into bitter and non-bitter groups. BitterPredict suggests that about 40% of random molecules, a large portion of clinical and experimental drugs (66%), and of natural products (77%) are bitter.
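A rough sketch of the modelling step, using scikit-learn's AdaBoostClassifier with its default decision-tree base learners on synthetic stand-in descriptors (the real model's features, data, and tuning are not reproduced here):

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(1)

# Hypothetical stand-in for physicochemical/ADME descriptor vectors;
# label 1 = bitter, 0 = non-bitter, generated from a simple rule.
X = rng.normal(size=(600, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Boosted decision trees, as in the AdaBoost approach described.
clf = AdaBoostClassifier(n_estimators=100, random_state=0)
clf.fit(X[:500], y[:500])
accuracy = clf.score(X[500:], y[500:])
print(round(accuracy, 2))
```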
Analyst-to-Analyst Variability in Simulation-Based Prediction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glickman, Matthew R.; Romero, Vicente J.
This report describes findings from the culminating experiment of the LDRD project entitled "Analyst-to-Analyst Variability in Simulation-Based Prediction". For this experiment, volunteer participants solving a given test problem in engineering and statistics were interviewed at different points in their solution process. These interviews are used to trace differing solutions to differing solution processes, and differing processes to differences in reasoning, assumptions, and judgments. The issue that the experiment was designed to illuminate -- our paucity of understanding of the ways in which humans themselves have an impact on predictions derived from complex computational simulations -- is a challenging and open one. Although solution of the test problem by analyst participants in this experiment has taken much more time than originally anticipated, and is continuing past the end of this LDRD, this project has provided a rare opportunity to explore analyst-to-analyst variability in significant depth, from which we derive evidence-based insights to guide further explorations in this important area.
Passenger ride quality determined from commercial airline flights
NASA Technical Reports Server (NTRS)
Richards, L. G.; Kuhlthau, A. R.; Jacobson, I. D.
1975-01-01
The University of Virginia ride-quality research program is reviewed. Data from two flight programs, involving seven types of aircraft, are considered in detail. An apparatus for measuring physical variations in the flight environment and recording the subjective reactions of test subjects is described. Models are presented for predicting the comfort response of test subjects from the physical data, and predicting the overall comfort reaction of test subjects from their moment by moment responses. The correspondence of mean passenger comfort judgments and test subject response is shown. Finally, the models of comfort response based on data from the 5-point and 7-point comfort scales are shown to correspond.
Abbreviated neuropsychological assessment in schizophrenia
Harvey, Philip D.; Keefe, Richard S. E.; Patterson, Thomas L.; Heaton, Robert K.; Bowie, Christopher R.
2008-01-01
The aim of this study was to identify the best subset of neuropsychological tests for prediction of several different aspects of functioning in a large (n = 236) sample of older people with schizophrenia. While the validity of abbreviated assessment methods has been examined before, there has never been a comparative study of the prediction of different elements of cognitive impairment, real-world outcomes, and performance-based measures of functional capacity. Scores on 10 different tests from a neuropsychological assessment battery were used to predict global neuropsychological (NP) performance (indexed with averaged scores or calculated general deficit scores), performance-based indices of everyday-living skills and social competence, and case-manager ratings of real-world functioning. Forward entry stepwise regression analyses were used to identify the best predictors for each of the outcomes measures. Then, the analyses were adjusted for estimated premorbid IQ, which reduced the magnitude, but not the structure, of the correlations. Substantial amounts (over 70%) of the variance in overall NP performance were accounted for by a limited number of NP tests. Considerable variance in measures of functional capacity was also accounted for by a limited number of tests. Different tests constituted the best predictor set for each outcome measure. A substantial proportion of the variance in several different NP and functional outcomes can be accounted for by a small number of NP tests that can be completed in a few minutes, although there is considerable unexplained variance. However, the abbreviated assessments that best predict different outcomes vary across outcomes. Future studies should determine whether responses to pharmacological and remediation treatments can be captured with brief assessments as well. PMID:18720182
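Forward-entry selection of the kind used here can be sketched as a greedy loop that repeatedly adds the predictor giving the largest R² gain; the data and variable names below are synthetic, not the study's test battery:

```python
import numpy as np

def forward_select(X, y, n_keep):
    """Greedy forward entry: repeatedly add the predictor that most
    improves R^2, mimicking forward stepwise regression."""
    n, p = X.shape

    def r2(cols):
        A = np.column_stack([np.ones(n), X[:, cols]])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        return 1 - resid.var() / y.var()

    chosen = []
    while len(chosen) < n_keep:
        best = max((c for c in range(p) if c not in chosen),
                   key=lambda c: r2(chosen + [c]))
        chosen.append(best)
    return chosen

rng = np.random.default_rng(0)
X = rng.normal(size=(236, 10))                     # 10 hypothetical test scores
y = 2 * X[:, 3] + X[:, 7] + rng.normal(scale=0.5, size=236)
print(forward_select(X, y, 2))  # [3, 7]
```

The greedy loop recovers the two informative predictors first, illustrating how a small subset can carry most of the predictable variance.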
Cryogenic Tank Modeling for the Saturn AS-203 Experiment
NASA Technical Reports Server (NTRS)
Grayson, Gary D.; Lopez, Alfredo; Chandler, Frank O.; Hastings, Leon J.; Tucker, Stephen P.
2006-01-01
A computational fluid dynamics (CFD) model is developed for the Saturn S-IVB liquid hydrogen (LH2) tank to simulate the 1966 AS-203 flight experiment. This significant experiment is the only known, adequately-instrumented, low-gravity, cryogenic self-pressurization test that is well suited for CFD model validation. A 4000-cell, axisymmetric model predicts motion of the LH2 surface including boil-off and thermal stratification in the liquid and gas phases. The model is based on a modified version of the commercially available FLOW3D software. During the experiment, heat enters the LH2 tank through the tank forward dome, side wall, aft dome, and common bulkhead. In both model and test, the liquid and gases thermally stratify in the low-gravity natural convection environment. LH2 boils at the free surface, which in turn increases the pressure within the tank during the 5360 second experiment. The Saturn S-IVB tank model is shown to accurately simulate the self-pressurization and thermal stratification in the 1966 AS-203 test. The average predicted pressurization rate is within 4% of the pressure rise rate suggested by test data. Ullage temperature results are also in good agreement with the test, where the model predicts an ullage temperature rise rate within 6% of the measured data. The model is based on first principles only and includes no adjustments to bring the predictions closer to the test data. Although quantitative model validation is achieved for one specific case, a significant step is taken towards demonstrating general use of CFD for low-gravity cryogenic fluid modeling.
NASA Astrophysics Data System (ADS)
Harken, B.; Geiges, A.; Rubin, Y.
2013-12-01
There are several stages in any hydrological modeling campaign, including: formulation and analysis of a priori information, data acquisition through field campaigns, inverse modeling, and forward modeling and prediction of some environmental performance metric (EPM). The EPM being predicted could be, for example, contaminant concentration, plume travel time, or aquifer recharge rate. These predictions often have significant bearing on some decision that must be made. Examples include: how to allocate limited remediation resources between multiple contaminated groundwater sites, where to place a waste repository site, and what extraction rates can be considered sustainable in an aquifer. Answering these questions depends on predictions of EPMs using forward models as well as on the levels of uncertainty attached to these predictions. Uncertainty in model parameters, such as hydraulic conductivity, leads to uncertainty in EPM predictions. Often, field campaigns and inverse modeling efforts are planned and undertaken with reduction of parametric uncertainty as the objective. The tool of hypothesis testing allows this to be taken one step further by considering uncertainty reduction in the ultimate prediction of the EPM as the objective, and it gives a rational basis for weighing costs and benefits at each stage. When using the tool of statistical hypothesis testing, the EPM is cast into a binary outcome. This is formulated as null and alternative hypotheses, which can be accepted or rejected with statistical formality. When accounting for all sources of uncertainty at each stage, the level of significance of this test provides a rational basis for planning, optimization, and evaluation of the entire campaign. Case-specific information, such as the consequences of prediction error and site-specific costs, can be used in establishing selection criteria based on what level of risk is deemed acceptable.
This framework is demonstrated and discussed using various synthetic case studies. The case studies involve contaminated aquifers where a decision must be made based on prediction of when a contaminant will arrive at a given location. The EPM, in this case contaminant travel time, is cast into the hypothesis testing framework. The null hypothesis states that the contaminant plume will arrive at the specified location before a critical value of time passes, and the alternative hypothesis states that the plume will arrive after the critical time passes. Different field campaigns are analyzed based on effectiveness in reducing the probability of selecting the wrong hypothesis, which in this case corresponds to reducing uncertainty in the prediction of plume arrival time. To examine the role of inverse modeling in this framework, case studies involving both Maximum Likelihood parameter estimation and Bayesian inversion are used.
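As a rough illustration of this framework, the probability of selecting the wrong hypothesis about plume arrival time can be estimated by Monte Carlo sampling. Everything below is a hypothetical sketch: the lognormal hydraulic-conductivity model, all parameter values, and the simple decision rule are assumptions for illustration, not the case studies' actual setup.

```python
import random
import statistics

random.seed(42)

def travel_time(mean_log_k, sd_log_k, distance=100.0, gradient=0.01, porosity=0.3):
    """Draw one plume travel time [d] under an assumed lognormal hydraulic
    conductivity K [m/d] (all parameter values here are hypothetical)."""
    k = 10 ** random.gauss(mean_log_k, sd_log_k)
    velocity = k * gradient / porosity  # average linear velocity [m/d]
    return distance / velocity

def prob_wrong_decision(mean_log_k, sd_log_k, t_critical, n=20000):
    """Monte Carlo proxy for the error probability of the arrival-time
    hypothesis test: how often a sampled travel time contradicts the
    decision implied by the median prediction."""
    times = [travel_time(mean_log_k, sd_log_k) for _ in range(n)]
    decide_early = statistics.median(times) < t_critical  # accepted hypothesis
    wrong = sum((t < t_critical) != decide_early for t in times)
    return wrong / n

# A field campaign that narrows the uncertainty on log K (sd 0.5 -> 0.2)
# reduces the risk of accepting the wrong hypothesis.
risk_prior = prob_wrong_decision(0.5, 0.5, t_critical=1500.0)
risk_posterior = prob_wrong_decision(0.5, 0.2, t_critical=1500.0)
```

Comparing `risk_prior` and `risk_posterior` for candidate field campaigns is the kind of cost-benefit weighing the framework formalizes.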
Improving orbit prediction accuracy through supervised machine learning
NASA Astrophysics Data System (ADS)
Peng, Hao; Bai, Xiaoli
2018-05-01
Due to the lack of information such as the space environment condition and resident space objects' (RSOs') body characteristics, current orbit predictions that are solely grounded on physics-based models may fail to achieve required accuracy for collision avoidance and have led to satellite collisions already. This paper presents a methodology to predict RSOs' trajectories with higher accuracy than that of the current methods. Inspired by the machine learning (ML) theory through which the models are learned based on large amounts of observed data and the prediction is conducted without explicitly modeling space objects and space environment, the proposed ML approach integrates physics-based orbit prediction algorithms with a learning-based process that focuses on reducing the prediction errors. Using a simulation-based space catalog environment as the test bed, the paper demonstrates three types of generalization capability for the proposed ML approach: (1) the ML model can be used to improve the same RSO's orbit information that is not available during the learning process but shares the same time interval as the training data; (2) the ML model can be used to improve predictions of the same RSO at future epochs; and (3) the ML model based on a RSO can be applied to other RSOs that share some common features.
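The integration of a physics-based propagator with a learning-based error correction can be caricatured as follows. The toy "propagator", the drag-like error term, and the quadratic time feature are all invented for illustration; the paper's actual ML models and simulated space catalog are far richer.

```python
import random

random.seed(0)

def physics_predict(t):
    """Toy 'physics-based' propagation: along-track position vs. time.
    A stand-in for a full orbit propagator (hypothetical)."""
    return 7.66 * t  # nominal along-track rate, arbitrary units

def observe(t):
    """'Observed' trajectory with an unmodeled drag-like quadratic error
    plus measurement noise (both invented for illustration)."""
    return 7.66 * t - 0.002 * t ** 2 + random.gauss(0.0, 0.05)

# Learning step: fit the residual (observation minus physics prediction)
# with a quadratic-in-time feature via least squares.
train_t = [float(t) for t in range(0, 200, 5)]
residuals = [observe(t) - physics_predict(t) for t in train_t]
coef = (sum(t ** 2 * r for t, r in zip(train_t, residuals))
        / sum(t ** 4 for t in train_t))

def ml_corrected_predict(t):
    """Physics prediction plus the learned error correction."""
    return physics_predict(t) + coef * t ** 2

# Generalization to a future epoch (case 2 in the paper's terminology).
t_future = 300.0
actual = observe(t_future)
err_physics = abs(actual - physics_predict(t_future))
err_ml = abs(actual - ml_corrected_predict(t_future))
```

The key design choice mirrored here is that the ML model never replaces the physics model; it only learns the structure of the physics model's errors.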
Sartor, Francesco; Bonato, Matteo; Papini, Gabriele; Bosio, Andrea; Mohammed, Rahil A.; Bonomi, Alberto G.; Moore, Jonathan P.; Merati, Giampiero; La Torre, Antonio; Kubis, Hans-Peter
2016-01-01
Cardio-respiratory fitness (CRF) is a widespread essential indicator in Sports Science as well as in Sports Medicine. This study aimed to develop and validate a prediction model for CRF based on a 45-second self-test, which can be conducted anywhere. A criterion-validity, test-retest study was set up to accomplish our objectives. Data from 81 healthy volunteers (age: 29 ± 8 years, BMI: 24.0 ± 2.9), 18 of whom were female, were used to validate this test against the gold standard. Nineteen volunteers repeated the test twice in order to evaluate its repeatability. CRF estimation models were developed using heart rate (HR) features extracted from the resting, exercise, and recovery phases. The most predictive HR feature was the intercept of the linear equation fitting the HR values during the recovery phase normalized for height² (r² = 0.30). The Ruffier-Dickson Index (RDI), which was originally developed for this squat test, showed a significant negative correlation with CRF (r = -0.40), but explained only 15% of the variability in CRF. A multivariate model based on RDI, sex, age and height increased the explained variability up to 53%, with a cross-validation (CV) error of 0.532 L·min⁻¹ and substantial repeatability (ICC = 0.91). The best predictive multivariate model made use of the linear intercept of HR at the beginning of the recovery normalized for height² and age²; this had an adjusted r² = 0.59, a CV error of 0.495 L·min⁻¹ and substantial repeatability (ICC = 0.93). It also had higher agreement in classifying CRF levels (κ = 0.42) than the RDI-based model (κ = 0.29). In conclusion, this simple 45 s self-test can be used to estimate and classify CRF in healthy individuals with moderate accuracy and large repeatability when HR recovery features are included. PMID:27959935
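The core fitting step behind such a model is an ordinary least-squares regression of measured CRF on an HR-recovery feature. The sketch below uses entirely made-up data pairs, invented only to show the mechanics of the fit and the r² computation, not the study's data or coefficients.

```python
# Hypothetical (invented) data: the paper's best single predictor was the
# intercept of the HR-recovery line normalized by height^2; pairs below are
# (HR feature, measured VO2max in L/min), made up for illustration.
data = [
    (38.0, 3.9), (42.0, 3.4), (45.0, 3.1), (48.0, 2.8),
    (50.0, 2.7), (53.0, 2.3), (55.0, 2.2), (58.0, 1.9),
]
n = len(data)
mean_x = sum(x for x, _ in data) / n
mean_y = sum(y for _, y in data) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in data)
         / sum((x - mean_x) ** 2 for x, _ in data))
intercept = mean_y - slope * mean_x

def predict_crf(hr_feature):
    """Predicted cardio-respiratory fitness (VO2max, L/min)."""
    return intercept + slope * hr_feature

# Coefficient of determination, analogous to the paper's reported r^2.
r2 = 1 - (sum((y - predict_crf(x)) ** 2 for x, y in data)
          / sum((y - mean_y) ** 2 for _, y in data))
```

A negative slope reproduces the direction of the reported relationship: a higher HR-recovery intercept corresponds to lower fitness.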
Business Planning in the Light of Neuro-fuzzy and Predictive Forecasting
NASA Astrophysics Data System (ADS)
Chakrabarti, Prasun; Basu, Jayanta Kumar; Kim, Tai-Hoon
In this paper we have pointed out gain sensing based on forecasting techniques. We have cited an idea of neural-based gain forecasting. Testing of the sequence of gain patterns is also verified using statistical analysis of fuzzy value assignment. The paper also suggests realization of a stable gain condition using K-Means clustering from data mining. A new concept of 3D-based gain sensing has been pointed out. The paper also reveals what type of trend analysis can be observed for probabilistic gain prediction.
NASA Technical Reports Server (NTRS)
Wang, John T.; Pineda, Evan J.; Ranatunga, Vipul; Smeltzer, Stanley S.
2015-01-01
A simple continuum damage mechanics (CDM) based 3D progressive damage analysis (PDA) tool for laminated composites was developed and implemented as a user-defined material subroutine to link with a commercially available explicit finite element code. This PDA tool uses linear lamina properties from standard tests, predicts damage initiation with the easy-to-implement Hashin-Rotem failure criterion, and, in the damage evolution phase, evaluates the degradation of material properties based on crack band theory and traction-separation cohesive laws. It follows Matzenmiller et al.'s formulation to incorporate the degrading material properties into the damaged stiffness matrix. Since nonlinear shear and matrix stress-strain relations are not implemented, correction factors are used to slow the reduction of the damaged shear stiffness terms and thereby reflect the effect of these nonlinearities on the laminate strength predictions. This CDM-based PDA tool is implemented as a user-defined material (VUMAT) linked with the Abaqus/Explicit code. Strength predictions obtained using this VUMAT are correlated with test data for a set of notched specimens under tension and compression loads.
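The damage-initiation step can be sketched as an in-plane Hashin-Rotem check. This is a textbook-style simplification of that class of criteria, not the VUMAT's actual implementation, and the stress and strength values below are hypothetical.

```python
def hashin_rotem(sigma11, sigma22, tau12, Xt, Xc, Yt, Yc, S):
    """Evaluate simplified in-plane Hashin-Rotem damage-initiation criteria
    for a lamina. Returns (fiber_failed, matrix_failed); stress and strength
    units must match. In a PDA tool, a positive result would trigger
    stiffness degradation (e.g. per crack-band theory)."""
    # Fiber mode: driven by the axial stress only (tension vs. compression).
    X = Xt if sigma11 >= 0 else Xc
    fiber = (sigma11 / X) ** 2 >= 1.0
    # Matrix mode: quadratic interaction of transverse and shear stress.
    Y = Yt if sigma22 >= 0 else Yc
    matrix = (sigma22 / Y) ** 2 + (tau12 / S) ** 2 >= 1.0
    return fiber, matrix

# Example lamina stress state, with hypothetical strengths (MPa):
fiber_fail, matrix_fail = hashin_rotem(
    sigma11=1200.0, sigma22=30.0, tau12=60.0,
    Xt=1500.0, Xc=1200.0, Yt=40.0, Yc=200.0, S=70.0)
```

Here the fiber check passes while the matrix interaction exceeds unity, so only matrix damage would initiate.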
Li, Huixia; Luo, Miyang; Zheng, Jianfei; Luo, Jiayou; Zeng, Rong; Feng, Na; Du, Qiyun; Fang, Junqun
2017-02-01
An artificial neural network (ANN) model was developed to predict the risks of congenital heart disease (CHD) in pregnant women. This hospital-based case-control study involved 119 CHD cases and 239 controls, all recruited from birth defect surveillance hospitals in Hunan Province between July 2013 and June 2014. All subjects were interviewed face-to-face to fill in a questionnaire that covered 36 CHD-related variables. The 358 subjects were randomly divided into a training set and a testing set at a ratio of 85:15. The training set was used to identify the significant predictors of CHD by univariate logistic regression analyses and to develop a standard feed-forward back-propagation neural network (BPNN) model for the prediction of CHD. The testing set was used to test and evaluate the performance of the ANN model. Univariate logistic regression analyses were performed in SPSS 18.0. The ANN models were developed in Matlab 7.1. The univariate logistic regression identified 15 predictors that were significantly associated with CHD, including education level (odds ratio = 0.55), gravidity (1.95), parity (2.01), history of abnormal reproduction (2.49), family history of CHD (5.23), maternal chronic disease (4.19), maternal upper respiratory tract infection (2.08), environmental pollution around maternal dwelling place (3.63), maternal exposure to occupational hazards (3.53), maternal mental stress (2.48), paternal chronic disease (4.87), paternal exposure to occupational hazards (2.51), intake of vegetable/fruit (0.45), intake of fish/shrimp/meat/egg (0.59), and intake of milk/soymilk (0.55). After many trials, we selected a 3-layer BPNN model with 15, 12, and 1 neuron in the input, hidden, and output layers, respectively, as the best prediction model. The prediction model has accuracies of 0.91 and 0.86 on the training and testing sets, respectively.
The sensitivity, specificity, and Youden index on the testing set (training set) are 0.78 (0.83), 0.90 (0.95), and 0.68 (0.78), respectively. The areas under the receiver operating characteristic curve on the testing and training sets are 0.87 and 0.97, respectively. This study suggests that the BPNN model could be used to predict the risk of CHD in individuals. This model should be further improved by large-sample-size research.
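The selected 15-12-1 architecture amounts to the forward pass sketched below. The weights here are random placeholders (the study trains them by backpropagation in Matlab); only the layer sizes and the sigmoid output in (0, 1), interpretable as CHD risk, come from the abstract.

```python
import math
import random

random.seed(1)

N_IN, N_HID, N_OUT = 15, 12, 1  # layer sizes from the selected BPNN model

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Random initial weights; training by backpropagation is omitted for brevity.
w_hid = [[random.uniform(-0.5, 0.5) for _ in range(N_IN + 1)] for _ in range(N_HID)]
w_out = [[random.uniform(-0.5, 0.5) for _ in range(N_HID + 1)] for _ in range(N_OUT)]

def forward(x):
    """One forward pass: 15 risk-factor inputs -> predicted CHD risk in (0, 1)."""
    x = x + [1.0]  # bias input
    h = [sigmoid(sum(w * v for w, v in zip(row, x))) for row in w_hid]
    h = h + [1.0]  # bias unit for the output layer
    return [sigmoid(sum(w * v for w, v in zip(row, h))) for row in w_out][0]

risk = forward([1.0] * N_IN)  # all 15 predictors coded as "present"
```

With trained weights, thresholding this output yields the sensitivity/specificity figures reported above.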
Online Soft Sensor of Humidity in PEM Fuel Cell Based on Dynamic Partial Least Squares
Long, Rong; Chen, Qihong; Zhang, Liyan; Ma, Longhua; Quan, Shuhai
2013-01-01
Online monitoring of humidity in the proton exchange membrane (PEM) fuel cell is an important issue in maintaining proper membrane humidity. The cost and size of existing sensors for monitoring humidity are prohibitive for online measurements. Online prediction of humidity using readily available measured data would be beneficial to water management. In this paper, a novel soft sensor method based on dynamic partial least squares (DPLS) regression is proposed and applied to humidity prediction in a PEM fuel cell. In order to obtain humidity data and test the feasibility of the proposed DPLS-based soft sensor, a hardware-in-the-loop (HIL) test system is constructed. The time lag of the DPLS-based soft sensor is selected as 30 by comparing the root-mean-square error at different time lags. The performance of the proposed DPLS-based soft sensor is demonstrated by experimental results. PMID:24453923
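What makes the PLS "dynamic" is that each regression input row stacks the current and lagged process measurements, so the static regression sees the process dynamics. A minimal sketch of building that lagged matrix, with a toy series and a single input channel assumed for illustration (the actual soft sensor uses multiple measured variables and PLS rather than direct regression):

```python
def make_lagged_matrix(u, y, lag):
    """Build the dynamic regressor matrix used by a DPLS-style soft sensor:
    row k stacks u[k-lag], ..., u[k], paired with the target y[k] (here,
    the humidity value to be predicted)."""
    X, Y = [], []
    for k in range(lag, len(u)):
        X.append(u[k - lag:k + 1])
        Y.append(y[k])
    return X, Y

# Toy series; in the paper, lag = 30 was chosen by minimising the RMSE.
u = [float(k) for k in range(100)]
y = [2.0 * v for v in u]
X, Y = make_lagged_matrix(u, y, lag=30)
```

Each row of `X` then has lag + 1 entries, and PLS regression of `Y` on `X` yields the soft-sensor model.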
Classification of baseline toxicants for QSAR predictions to replace fish acute toxicity studies.
Nendza, Monika; Müller, Martin; Wenzel, Andrea
2017-03-22
Fish acute toxicity studies are required for environmental hazard and risk assessment of chemicals by national and international legislation such as REACH, the regulations of plant protection products and biocidal products, or the GHS (globally harmonised system) for classification and labelling of chemicals. Alternative methods like QSARs (quantitative structure-activity relationships) can replace many ecotoxicity tests. However, complete substitution of in vivo animal tests by in silico methods may not be realistic. For the so-called baseline toxicants, it is possible to predict the fish acute toxicity with sufficient accuracy from log Kow and, hence, valid QSARs can replace in vivo testing. In contrast, excess toxicants and chemicals not reliably classified as baseline toxicants require further in silico, in vitro or in vivo assessments. Thus, the critical task is to discriminate between baseline and excess toxicants. For fish acute toxicity, we derived a scheme based on structural alerts and physicochemical property thresholds to classify chemicals as either baseline toxicants (= predictable by QSARs) or as potential excess toxicants (= not predictable by baseline QSARs). The step-wise approach identifies baseline toxicants (true negatives) in a precautionary way to avoid false negative predictions. Therefore, a certain fraction of false positives can be tolerated, i.e. baseline toxicants without specific effects that may be tested instead of predicted. Application of the classification scheme to a new heterogeneous dataset for diverse fish species results in 40% baseline toxicants, 24% excess toxicants and 36% compounds not classified. Thus, we can conclude that replacing about half of the fish acute toxicity tests by QSAR predictions is realistic to achieve in the short term. The long-term goals are classification criteria for further groups of toxicants as well, and to replace as many in vivo fish acute toxicity tests as possible with valid QSAR predictions.
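Baseline (narcosis) QSARs of this kind are linear in log Kow, with toxicity increasing (LC50 decreasing) as hydrophobicity rises. The sketch below illustrates that shape and the step-wise gating idea; the coefficients and the single-alert gate are placeholders, not the paper's derived scheme or any specific published regression.

```python
def baseline_lc50_mol_per_l(log_kow):
    """Baseline-narcosis-style QSAR: log LC50 = -(a * log Kow + b), LC50 in
    mol/L. Coefficients a = 0.87, b = 1.87 are placeholders chosen only to
    show the class of model (more hydrophobic -> more toxic)."""
    return 10 ** -(0.87 * log_kow + 1.87)

def classify(log_kow, has_structural_alert):
    """Step-wise idea of the scheme: only chemicals without alerts for
    excess toxicity are treated as baseline, i.e. QSAR-predictable."""
    if has_structural_alert:
        return "potential excess toxicant: further assessment required"
    return ("baseline toxicant: predicted LC50 = %.2e mol/L"
            % baseline_lc50_mol_per_l(log_kow))
```

A real scheme would combine many alerts and property thresholds, and would route unclassified compounds to in vitro or in vivo follow-up.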
Memory Binding Test Predicts Incident Dementia: Results from the Einstein Aging Study.
Mowrey, Wenzhu B; Lipton, Richard B; Katz, Mindy J; Ramratan, Wendy S; Loewenstein, David A; Zimmerman, Molly E; Buschke, Herman
2018-01-01
The Memory Binding Test (MBT) demonstrated good cross-sectional discriminative validity and predicted incident aMCI. To assess whether the MBT predicts incident dementia better than a conventional list learning test in a longitudinal community-based study. As a sub-study in the Einstein Aging Study, 309 participants aged ≥70 initially free of dementia were administered the MBT and followed annually for incident dementia for up to 13 years. Based on previous work, poor memory binding was defined using an optimal empirical cut-score of ≤17 on the binding measure of the MBT, Total Items in the Paired condition (TIP). Cox proportional hazards models were used to assess predictive validity adjusting for covariates. We compared the predictive validity of MBT TIP to that of the free and cued selective reminding test free recall score (FCSRT-FR; cut-score: ≤24) and the single list recall measure of the MBT, Cued Recall from List 1 (CR-L1; cut-score: ≤12). Thirty-five of 309 participants developed incident dementia. When assessing each test alone, the hazard ratio (HR) for dementia was significant for MBT TIP (HR = 8.58, 95% CI: (3.58, 20.58), p < 0.0001), FCSRT-FR (HR = 4.19, 95% CI: (1.94, 9.04), p = 0.0003) and MBT CR-L1 (HR = 2.91, 95% CI: (1.37, 6.18), p = 0.006). MBT TIP remained a significant predictor of dementia (p = 0.0002) when adjusting for FCSRT-FR or CR-L1. Older adults with poor memory binding as measured by the MBT TIP were at increased risk for incident dementia. This measure outperforms conventional episodic memory measures of free and cued recall, supporting the memory binding hypothesis.
Does the MCAT predict medical school and PGY-1 performance?
Saguil, Aaron; Dong, Ting; Gingerich, Robert J; Swygert, Kimberly; LaRochelle, Jeffrey S; Artino, Anthony R; Cruess, David F; Durning, Steven J
2015-04-01
The Medical College Admission Test (MCAT) is a high-stakes test required for entry to most U.S. medical schools; admissions committees use this test to predict future accomplishment. Although there is evidence that the MCAT predicts success on multiple choice-based assessments, there is little information on whether the MCAT predicts clinical-based assessments of undergraduate and graduate medical education performance. This study looked at associations between the MCAT and medical school grade point average (GPA), United States Medical Licensing Examination (USMLE) scores, observed patient care encounters, and residency performance assessments. This study used data collected as part of the Long-Term Career Outcome Study to determine associations between MCAT scores; USMLE Step 1, Step 2 clinical knowledge and clinical skills, and Step 3 scores; Objective Structured Clinical Examination performance; medical school GPA; and PGY-1 program director (PD) assessment of physician performance for students graduating in 2010 and 2011. MCAT data were available for all students, and the PGY-1 PD evaluation response rate was 86.2% (N = 340). All permutations of MCAT scores (first, last, highest, average) were weakly associated with GPA, Step 2 clinical knowledge scores, and Step 3 scores. MCAT scores were weakly to moderately associated with Step 1 scores. MCAT scores were not significantly associated with Step 2 clinical skills Integrated Clinical Encounter and Communication and Interpersonal Skills subscores, Objective Structured Clinical Examination performance, or PGY-1 PD evaluations. MCAT scores were weakly to moderately associated with assessments that rely on multiple choice testing. The association is somewhat stronger for assessments occurring earlier in medical school, such as USMLE Step 1. The MCAT was not able to predict assessments relying on direct clinical observation, nor was it able to predict PD assessment of PGY-1 performance.
Reprint & Copyright © 2015 Association of Military Surgeons of the U.S.
Motl, Robert W; Fernhall, Bo
2012-03-01
To examine the accuracy of predicting peak oxygen consumption (VO2peak) primarily from peak work rate (WRpeak) recorded during a maximal, incremental exercise test on a cycle ergometer among persons with relapsing-remitting multiple sclerosis (RRMS) who had minimal disability. Cross-sectional study. Clinical research laboratory. Women with RRMS (n=32) and sex-, age-, height-, and weight-matched healthy controls (n=16) completed an incremental exercise test on a cycle ergometer to volitional termination. Not applicable. Measured and predicted VO2peak and WRpeak. There were strong, statistically significant associations between measured and predicted VO2peak in the overall sample (R² = .89, standard error of the estimate = 127.4 mL/min) and subsamples with (R² = .89, standard error of the estimate = 131.3 mL/min) and without (R² = .85, standard error of the estimate = 126.8 mL/min) multiple sclerosis (MS) based on the linear regression analyses. Based on the 95% confidence limits for worst-case errors, the equation predicted VO2peak within 10% of its true value in 95 of every 100 subjects with MS. Peak VO2 can be accurately predicted in persons with RRMS who have minimal disability as it is in controls by using established equations and WRpeak recorded from a maximal, incremental exercise test on a cycle ergometer. Copyright © 2012 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
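Established cycle-ergometer prediction equations of the kind referred to here are linear in peak power and body mass; one commonly cited ACSM-style form is VO2 (mL/min) ≈ 10.8 × power (W) + 7 × mass (kg). The abstract does not state which exact equation was used, so the sketch below is illustrative of the equation class, not a reproduction of the study's model.

```python
def predict_vo2peak_ml_min(wr_peak_watts, body_mass_kg):
    """ACSM-style leg-cycle-ergometry prediction:
    VO2 (mL/min) ~ 10.8 * power (W) + 7 * body mass (kg).
    The 7 * mass term covers resting plus unloaded-cycling oxygen cost."""
    return 10.8 * wr_peak_watts + 7.0 * body_mass_kg

# Example: WRpeak of 150 W for a 65 kg participant.
vo2 = predict_vo2peak_ml_min(150.0, 65.0)
```

For this example the predicted VO2peak is 2075 mL/min, i.e. about 2.1 L/min.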
Prediction of Dementia in Primary Care Patients
Jessen, Frank; Wiese, Birgitt; Bickel, Horst; Eiffländer-Gorfer, Sandra; Fuchs, Angela; Kaduszkiewicz, Hanna; Köhler, Mirjam; Luck, Tobias; Mösch, Edelgard; Pentzek, Michael; Riedel-Heller, Steffi G.; Wagner, Michael; Weyerer, Siegfried; Maier, Wolfgang; van den Bussche, Hendrik
2011-01-01
Background Current approaches for AD prediction are based on biomarkers, which are however of restricted availability in primary care. AD prediction tools for primary care are therefore needed. We present a prediction score based on information that can be obtained in the primary care setting. Methodology/Principal Findings We performed a longitudinal cohort study in 3,055 non-demented individuals above 75 years recruited via primary care chart registries (Study on Aging, Cognition and Dementia, AgeCoDe). After the baseline investigation we performed three follow-up investigations at 18-month intervals with incident dementia as the primary outcome. The best set of predictors was extracted from the baseline variables in one randomly selected half of the sample. This set included age, subjective memory impairment, performance on delayed verbal recall and verbal fluency, on the Mini-Mental State Examination, and on an instrumental activities of daily living scale. These variables were aggregated into a prediction score, which achieved a prediction accuracy of 0.84 for AD. The score was applied to the second half of the sample (test cohort). Here, the prediction accuracy was 0.79. With a cut-off of at least 80% sensitivity in the first cohort, 79.6% sensitivity, 66.4% specificity, 14.7% positive predictive value (PPV) and 97.8% negative predictive value (NPV) for AD were achieved in the test cohort. At a cut-off for a high-risk population (5% of individuals with the highest risk score in the first cohort) the PPV for AD was 39.1% (52% for any dementia) in the test cohort. Conclusions The prediction score has useful prediction accuracy. It can define individuals (1) sensitively for low-cost, low-risk interventions, or (2) more specifically and with increased PPV for measures of prevention with greater costs or risks. As it is independent of technical aids, it may be used within large-scale prevention programs. PMID:21364746
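The relationship between the reported sensitivity, specificity, and predictive values follows directly from Bayes' rule once an outcome rate is fixed. The sketch below shows the conversion; the ~7% AD rate plugged in is an assumption chosen for illustration, not a figure stated in the abstract.

```python
def ppv_npv(sensitivity, specificity, prevalence):
    """Convert test characteristics into predictive values via Bayes' rule,
    using expected confusion-matrix fractions."""
    tp = sensitivity * prevalence
    fp = (1.0 - specificity) * (1.0 - prevalence)
    fn = (1.0 - sensitivity) * prevalence
    tn = specificity * (1.0 - prevalence)
    return tp / (tp + fp), tn / (tn + fn)

# Test-cohort characteristics from the study, with an assumed ~7% AD rate:
ppv, npv = ppv_npv(0.796, 0.664, 0.07)
```

With these inputs the computed PPV and NPV land near the reported 14.7% and 97.8%, illustrating why a sensitive cut-off in a low-incidence cohort yields a low PPV but a high NPV.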
Sampath, Sivananthan; Tkachenko, Pavlo; Renard, Eric; Pereverzev, Sergei V
2016-11-01
Despite the risk associated with nocturnal hypoglycemia (NH), there are only a few methods aiming at the prediction of such events based on intermittent blood glucose monitoring data. One of the first methods that can potentially be used for NH prediction is based on the low blood glucose index (LBGI) and suggested, for example, in Accu-Chek® Connect as a hypoglycemia risk indicator. On the other hand, nowadays there are other glucose control indices (GCI), which could be used for NH prediction in the same spirit as the LBGI. In the present study we propose a general approach for combining NH predictors constructed from different GCI. The approach is based on a recently developed strategy for aggregating ranking algorithms in machine learning. NH predictors have been calibrated and tested on data extracted from clinical trials performed in the EU FP7-funded project DIAdvisor. Then, to show the portability of the method, we tested it on another dataset received from the EU Horizon 2020-funded project AMMODIT. We exemplify the proposed approach by aggregating NH predictors constructed from 4 GCI associated with hypoglycemia. Even though these predictors had been preliminarily optimized to exhibit better performance on the considered dataset, our aggregation approach allows a further performance improvement. On the dataset where the portability of the proposed approach was demonstrated, the aggregating predictor exhibited the following performance: sensitivity 77%, specificity 83.4%, positive predictive value 80.2%, negative predictive value 80.6%, which is higher than what is conventionally considered acceptable. The proposed approach shows potential to be used in telemedicine systems for NH prediction. © 2016 Diabetes Technology Society.
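The LBGI mentioned above is computed from a Kovatchev-style risk transform of blood glucose readings, in which only values on the hypoglycemic side of the symmetrized scale contribute. The sketch below shows that computation plus a simple weighted combination standing in for the paper's ranking-aggregation step; the example glucose series and the weighted-vote aggregation are assumptions for illustration.

```python
import math

def lbgi(bg_values_mg_dl):
    """Low Blood Glucose Index: mean of the low-side risk values of the
    Kovatchev-style symmetrized BG scale (BG in mg/dL). Higher values
    indicate greater hypoglycemia risk."""
    def risk_low(bg):
        f = 1.509 * (math.log(bg) ** 1.084 - 5.381)
        return 10.0 * f * f if f < 0 else 0.0
    return sum(risk_low(bg) for bg in bg_values_mg_dl) / len(bg_values_mg_dl)

def aggregate_predictors(scores, weights):
    """Stand-in for the paper's ranking-aggregation strategy: a weighted
    combination of several GCI-based risk scores (weights would come from
    calibration data; this simple vote is an assumption)."""
    return sum(w * s for w, s in zip(weights, scores))

night_low = [70.0, 65.0, 80.0, 60.0]     # mg/dL, hypoglycemia-prone night
night_ok = [110.0, 120.0, 100.0, 130.0]  # mg/dL, safer night
```

As expected, the hypoglycemia-prone night scores a markedly higher LBGI than the safer one.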
Pyo, Sujin; Lee, Jaewook; Cha, Mincheol; Jang, Huisu
2017-01-01
The prediction of the trends of stock and index prices is one of the important issues for market participants. Investors set trading or fiscal strategies based on these trends, and considerable research across various academic fields has sought to forecast financial markets. This study predicts the trends of the Korea Composite Stock Price Index 200 (KOSPI 200) prices using nonparametric machine learning models: an artificial neural network and support vector machines with polynomial and radial basis function kernels. In addition, this study states controversial issues and tests hypotheses about the issues. Our results are inconsistent with those of the precedent research, which is generally considered to have high prediction performance. Moreover, Google Trends data proved not to be effective factors in predicting the KOSPI 200 index prices in our frameworks. Furthermore, the ensemble methods did not improve the accuracy of the prediction. PMID:29136004
Patterson, Emma; Quetel, Anna-Karin; Lilja, Karin; Simma, Marit; Olsson, Linnea; Elinder, Liselotte Schäfer
2013-06-01
To develop a feasible, valid, reliable web-based instrument to objectively evaluate school meal quality in Swedish primary schools. The construct 'school meal quality' was operationalized by an expert panel into six domains, one of which was nutritional quality. An instrument was drafted and pilot-tested. Face validity was evaluated by the panel. Feasibility was established via a large national study. Food-based criteria to predict the nutritional adequacy of school meals in terms of fat quality, iron, vitamin D and fibre content were developed. Predictive validity was evaluated by comparing the nutritional adequacy of school menus based on these criteria with the results from a nutritional analysis. Inter-rater reliability was also assessed. The instrument was developed between 2010 and 2012. It is designed for use in all primary schools by school catering and/or management representatives. A pilot-test of eighty schools in Stockholm (autumn 2010) and a further test of feasibility in 191 schools nationally (spring 2011). The four nutrient-specific food-based criteria predicted nutritional adequacy with sensitivity ranging from 0.85 to 1.0, specificity from 0.45 to 1.0 and accuracy from 0.67 to 1.0. The sample in the national study was statistically representative and the majority of users rated the questionnaire positively, suggesting the instrument is feasible. The inter-rater reliability was fair to almost perfect for continuous variables and agreement was ≥ 67 % for categorical variables. An innovative web-based system to comprehensively monitor school meal quality across several domains, with validated questions in the nutritional domain, is available in Sweden for the first time.
Changes in Predictive Task Switching with Age and with Cognitive Load.
Levy-Tzedek, Shelly
2017-01-01
Predictive control of movement is more efficient than feedback-based control, and is an important skill in everyday life. We tested whether the ability to predictively control movements of the upper arm is affected by age and by cognitive load. A total of 63 participants were tested in two experiments. In both experiments participants were seated, and controlled a cursor on a computer screen by flexing and extending their dominant arm. In Experiment 1, 20 young adults and 20 older adults were asked to continuously change the frequency of their horizontal arm movements, with the goal of inducing an abrupt switch between discrete movements (at low frequencies) and rhythmic movements (at high frequencies). We tested whether that change was performed based on a feed-forward (predictive) or on a feedback (reactive) control. In Experiment 2, 23 young adults performed the same task, while being exposed to a cognitive load half of the time via a serial subtraction task. We found that both aging and cognitive load diminished, on average, the ability of participants to predictively control their movements. Five older adults and one young adult under a cognitive load were not able to perform the switch between rhythmic and discrete movement (or vice versa). In Experiment 1, 40% of the older participants were able to predictively control their movements, compared with 70% in the young group. In Experiment 2, 48% of the participants were able to predictively control their movements with a cognitively loading task, compared with 70% in the no-load condition. The ability to predictively change a motor plan in anticipation of upcoming changes may be an important component in performing everyday functions, such as safe driving and avoiding falls.
NASA Technical Reports Server (NTRS)
Kemmerly, Guy T.
1990-01-01
A moving-model ground-effect testing method was used to study the influence of rate-of-descent on the aerodynamic characteristics for the F-15 STOL and Maneuver Technology Demonstrator (S/MTD) configuration for both the approach and roll-out phases of landing. The approach phase was modeled for three rates of descent, and the results were compared to the predictions from the F-15 S/MTD simulation data base (prediction based on data obtained in a wind tunnel with zero rate of descent). This comparison showed significant differences due both to the rate of descent in the moving-model test and to the presence of the ground boundary layer in the wind tunnel test. Relative to the simulation data base predictions, the moving-model test showed substantially less lift increase in ground effect, less nose-down pitching moment, and less increase in drag. These differences became more prominent at the larger thrust vector angles. Over the small range of rates of descent tested using the moving-model technique, the effect of rate of descent on longitudinal aerodynamics was relatively constant. The results of this investigation indicate no safety-of-flight problems with the lower jets vectored up to 80 deg on approach. The results also indicate that this configuration could employ a nozzle concept using lower reverser vector angles up to 110 deg on approach if a no-flare approach procedure were adopted and if inlet reingestion does not pose a problem.
Wilde, Alex; Meiser, Bettina; Mitchell, Philip B; Schofield, Peter R
2010-01-01
The past decade has seen rapid advances in the identification of associations between candidate genes and a range of common multifactorial disorders. This paper evaluates public attitudes towards the complexity of genetic risk prediction in psychiatry involving susceptibility genes, uncertain penetrance and gene-environment interactions on which successful molecular-based mental health interventions will depend. A qualitative approach was taken to enable the exploration of the views of the public. Four structured focus groups were conducted with a total of 36 participants. The majority of participants indicated interest in having a genetic test for susceptibility to major depression, if it was available. Having a family history of mental illness was cited as a major reason. After discussion of perceived positive and negative implications of predictive genetic testing, nine of 24 participants initially interested in having such a test changed their mind. Fear of genetic discrimination and privacy issues predominantly influenced change of attitude. All participants still interested in having a predictive genetic test for risk for depression reported they would only do so through trusted medical professionals. Participants were unanimously against direct-to-consumer genetic testing marketed through the Internet, although some would consider it if there was suitable protection against discrimination. The study highlights the importance of general practitioner and public education about psychiatric genetics, and the availability of appropriate treatment and support services prior to implementation of future predictive genetic testing services.
Hovanitz, Christine A; Thatcher, Dawn Lindsay
2012-03-01
Academic work as well as compensated employment has been found adversely associated with frequent headache; headache remains a costly disorder to the person and to society. However, little is known of factors--other than prior headache complaints--that may predict headache frequency over extended periods of time. Based on previous research, effortful task engagement appears to be a contributing factor to headache onset. This suggests that relatively stable attributes that are likely to affect effort expenditure may predict headache frequency over long intervals. The goal of this study was to evaluate the predictability of headache proneness in college-attending students by college aptitude tests administered in high school. Five hundred undergraduate students enrolled in a large public, urban university completed a number of questionnaires. Official admissions records of the college aptitude tests ACT (an acronym for the original test name, the American College Testing), SAT (the Scholastic Aptitude Test), and GPA (grade point average) were obtained and compared to the report of headache frequency. The ACT mathematics test predicted headache proneness in the hypothesized direction, while the ACT English test provided conflicting data; some evidence of gender differences was suggested. While nearly all research on headache and work effectiveness has considered headache to be a cause of reduced efficiency or productivity, this study suggests that a factor which presumably affects the ease of work completion (e.g., scholastic aptitude) may predict headache, at least in some cases within the "work" environment of academia.
Nondestructive test determines overload destruction characteristics of current limiter fuses
NASA Technical Reports Server (NTRS)
Swartz, G. A.
1968-01-01
Nondestructive test predicts the time required for current limiters to blow (open the circuit) when subjected to a given overload. The test method is based on an empirical relationship between the voltage rise across a current limiter for a fixed time interval and the time to blow.
Confidence Testing for Knowledge-Based Global Communities
ERIC Educational Resources Information Center
Jack, Brady Michael; Liu, Chia-Ju; Chiu, Houn-Lin; Shymansky, James A.
2009-01-01
This proposal advocates the position that the use of confidence wagering (CW) during testing can predict the accuracy of a student's test answer selection during between-subject assessments. Data revealed female students were more favorable to taking risks when making CW and less inclined toward risk aversion than their male counterparts. Student…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Purdy, R.
A hierarchical model consisting of quantitative structure-activity relationships (QSARs) based mainly on chemical reactivity was developed to predict the carcinogenicity of organic chemicals to rodents. The model comprises QSARs based on hypothesized mechanisms of action, metabolism, and partitioning. Predictors included octanol/water partition coefficient, molecular size, atomic partial charge, bond angle strain, atomic acceptor delocalizibility, atomic radical superdelocalizibility, the lowest unoccupied molecular orbital (LUMO) energy of the hypothesized intermediate nitrenium ion of primary aromatic amines, the difference in charge of ionized and unionized carbon-chlorine bonds, substituent size and pattern on polynuclear aromatic hydrocarbons, the distance between lone electron pairs over a rigid structure, and the presence of functionalities such as nitroso and hydrazine. The model correctly classified 96% of the carcinogens in the training set of 306 chemicals, and 90% of the carcinogens in the test set of 301 chemicals. The test set by chance contained 84% of the positive thio-containing chemicals, so a QSAR for these chemicals was developed; the model modified after the test set correctly predicted 94% of the carcinogens in the test set. This model was used to predict the carcinogenicity of the 25 organic chemicals the U.S. National Toxicology Program was testing at the writing of this article.
Researches of fruit quality prediction model based on near infrared spectrum
NASA Astrophysics Data System (ADS)
Shen, Yulin; Li, Lian
2018-04-01
With the improvement in standards for food quality and safety, people pay more attention to the internal quality of fruits; measuring fruit internal quality is therefore increasingly imperative. Nondestructive analysis of soluble solid content (SSC) and total acid content (TAC) is vital and effective for quality measurement in global fresh produce markets, so in this paper we aim to establish a novel fruit internal quality prediction model based on SSC and TAC for near-infrared spectra. Firstly, fruit quality prediction models based on PCA + BP neural network, PCA + GRNN network, PCA + BP adaboost strong classifier, PCA + ELM and PCA + LS_SVM classifier are designed and implemented. Then, in the NSCT domain, the median filter and the Savitzky-Golay filter are used to preprocess the spectral signal, and the Kennard-Stone algorithm is used to automatically select the training and test samples. Thirdly, we obtain the optimal models by comparing 15 kinds of prediction model under a multi-classifier competition mechanism; specifically, nonparametric estimation is introduced to measure the effectiveness of each proposed model, with the reliability and variance of the nonparametric estimate used to evaluate each prediction result and the estimated value and confidence interval serving as a reference. The experimental results demonstrate that this framework achieves the best evaluation of the internal quality of fruit. Finally, we employ cat swarm optimization to further optimize the two best models obtained from the nonparametric estimation; empirical testing indicates that the proposed method provides more accurate and effective results than other forecasting methods.
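The Kennard-Stone selection step mentioned above is simple to state: seed the training set with the two most mutually distant samples, then repeatedly add the sample whose minimum distance to the already-selected set is largest. A minimal sketch using plain Euclidean distance on illustrative 2-D points rather than real spectra:

```python
import math

def kennard_stone(samples, n_select):
    """Kennard-Stone subset selection: start from the two farthest samples,
    then greedily add the sample farthest from the selected set."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    n = len(samples)
    # seed with the two mutually farthest samples
    i0, j0 = max(((i, j) for i in range(n) for j in range(i + 1, n)),
                 key=lambda p: dist(samples[p[0]], samples[p[1]]))
    selected = [i0, j0]
    remaining = [k for k in range(n) if k not in selected]
    while len(selected) < n_select and remaining:
        k = max(remaining,
                key=lambda r: min(dist(samples[r], samples[s]) for s in selected))
        selected.append(k)
        remaining.remove(k)
    return selected

points = [(0.0, 0.0), (10.0, 0.0), (5.0, 5.0), (1.0, 1.0)]
print(kennard_stone(points, 3))   # → [0, 1, 2]
```

The unselected remainder then serves as the test set, which is how the algorithm yields a reproducible train/test split without random sampling.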
Priming of Spatial Distance Enhances Children's Creative Performance
ERIC Educational Resources Information Center
Liberman, Nira; Polack, Orli; Hameiri, Boaz; Blumenfeld, Maayan
2012-01-01
According to construal level theory, psychological distance promotes more abstract thought. Theories of creativity, in turn, suggest that abstract thought promotes creativity. Based on these lines of theorizing, we predicted that spatial distancing would enhance creative performance in elementary school children. To test this prediction, we primed…
Literature-based condition-specific miRNA-mRNA target prediction.
Oh, Minsik; Rhee, Sungmin; Moon, Ji Hwan; Chae, Heejoon; Lee, Sunwon; Kang, Jaewoo; Kim, Sun
2017-01-01
miRNAs are small non-coding RNAs that regulate gene expression by binding to the 3'-UTR of genes. Many recent studies have reported that miRNAs play important biological roles by regulating specific mRNAs or genes. Many sequence-based target prediction algorithms have been developed to predict miRNA targets. However, these methods are not designed for condition-specific target predictions and produce many false positives; thus, expression-based target prediction algorithms have been developed for condition-specific target predictions. A typical strategy to utilize expression data is to leverage the negative control roles of miRNAs on genes. To control false positives, a stringent cutoff value is typically set, but in this case, these methods tend to reject many true target relationships, i.e., false negatives. To overcome these limitations, additional information should be utilized. The literature is probably the best resource that we can utilize. Recent literature mining systems compile millions of articles with experiments designed for specific biological questions, and the systems provide a function to search for specific information. To utilize the literature information, we used a literature mining system, BEST, that automatically extracts information from the literature in PubMed and that allows the user to perform searches of the literature with any English words. By integrating omics data analysis methods and BEST, we developed Context-MMIA, a miRNA-mRNA target prediction method that combines expression data analysis results and the literature information extracted based on the user-specified context. In the pathway enrichment analysis using genes included in the top 200 miRNA-targets, Context-MMIA outperformed the four existing target prediction methods that we tested. In another test on whether prediction methods can reproduce experimentally validated target relationships, Context-MMIA outperformed the four existing target prediction methods.
In summary, Context-MMIA allows the user to specify a context of the experimental data to predict miRNA targets, and we believe that Context-MMIA is very useful for predicting condition-specific miRNA targets.
Zenilman, J M; Miller, W C; Gaydos, C; Rogers, S M; Turner, C F
2003-04-01
Nucleic acid amplification tests have facilitated field-based STD studies and increased screening activities. However, even with highly specific tests, the positive predictive value (PPV) of such tests may be lower than desirable in low-prevalence populations. We estimated PPVs for a single LCR test in a population survey in which positive specimens were retested. The Baltimore STD and Behavior Survey (BSBS) was a population-based behavioural survey of adults which included collecting urine specimens to assess the prevalence of gonorrhoea and chlamydial infection. Gonorrhoea and chlamydial infection were diagnosed by ligase chain reaction (LCR). Nearly all positive results were retested by LCR. Because of cost considerations, negative results were not confirmed. Predicted curves for the PPV were calculated for a single testing assuming an LCR test sensitivity of 95%, and test specificities in the range 95.0%-99.9%, for disease prevalences between 1% and 10%. Positive specimens were retested to derive empirical estimates of the PPV of a positive result on a single LCR test. 579 participants aged 18-35 provided urine specimens. 20 (3.5%) subjects initially tested positive for chlamydial infection, and 39 (6.7%) tested positive for gonococcal infection. If positive results on the repeat LCR are taken as confirmation of a "true" infection, the observed PPV for the first LCR testing was 89.5% for chlamydial infection and 83.3% for gonorrhoea. This is within the range of theoretical PPVs calculated from the assumed sensitivities and specificities of the LCR assays. Empirical performance of a single LCR testing approximated the theoretically predicted PPV in this field study. This result demonstrates the need to take account of the lower PPVs obtained when such tests are used in field studies or clinical screening of low-prevalence populations.
Repeat testing of specimens, preferably with a different assay (for example, polymerase chain reaction), and disclosure of the non-trivial potential for false positive test results would seem appropriate in all such studies.
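The theoretical PPV curves described above come directly from Bayes' rule applied to the assumed sensitivity, specificity, and prevalence. A small sketch using the abstract's assumed values (illustrative only, not the study's code):

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """PPV = P(infected | test positive) by Bayes' rule:
    sens*prev / (sens*prev + (1 - spec)*(1 - prev))."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# With 95% sensitivity, PPV falls sharply as prevalence and specificity drop
print(round(positive_predictive_value(0.95, 0.99, 0.05), 3))   # → 0.833
print(round(positive_predictive_value(0.95, 0.95, 0.01), 3))   # → 0.161
```

This is why even a highly specific assay yields a non-trivial share of false positives when screening a low-prevalence population.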
Kazaura, Kamugisha; Omae, Kazunori; Suzuki, Toshiji; Matsumoto, Mitsuji; Mutafungwa, Edward; Korhonen, Timo O; Murakami, Tadaaki; Takahashi, Koichi; Matsumoto, Hideki; Wakamori, Kazuhiko; Arimoto, Yoshinori
2006-06-12
The deterioration and deformation of a free-space optical beam wave-front as it propagates through the atmosphere can reduce link availability and may introduce burst errors, thus degrading the performance of the system. We investigate the suitability of soft-computing (SC) based tools for improving the performance of free-space optical (FSO) communications systems. The SC based tools are used for the prediction of key parameters of a FSO communications system. Measured data collected from an experimental FSO communication system are used as training and testing data for a proposed multi-layer neural network predictor (MNNP) used to predict future parameter values. The predicted parameters are essential for reducing transmission errors by improving the accuracy with which the antenna tracks data beams, particularly during periods of strong atmospheric turbulence. The parameter values predicted using the proposed tool show acceptable conformity with the original measurements.
Thin-slice vision: inference of confidence measure from perceptual video quality
NASA Astrophysics Data System (ADS)
Hameed, Abdul; Balas, Benjamin; Dai, Rui
2016-11-01
There has been considerable research on thin-slice judgments, but no study has demonstrated the predictive validity of confidence measures when assessors watch videos acquired from communication systems, in which the perceptual quality of videos could be degraded by limited bandwidth and unreliable network conditions. This paper studies the relationship between high-level thin-slice judgments of human behavior and factors that contribute to perceptual video quality. Based on a large number of subjective test results, it has been found that the confidence of a single individual present in all the videos, called speaker's confidence (SC), could be predicted by a list of features that contribute to perceptual video quality. Two prediction models, one based on an artificial neural network and the other on a decision tree, were built to predict SC. Experimental results have shown that both prediction models can result in high correlation measures.
NASA Astrophysics Data System (ADS)
Liang, Yunyun; Liu, Sanyang; Zhang, Shengli
2017-02-01
Apoptosis is a fundamental process controlling normal tissue homeostasis by regulating a balance between cell proliferation and death. Predicting the subcellular location of apoptosis proteins is very helpful for understanding the mechanism of programmed cell death. Prediction of apoptosis protein subcellular location remains a challenging and complicated task, and existing methods rely mainly on protein primary sequences. In this paper, we propose a new position-specific scoring matrix (PSSM)-based model using the Geary autocorrelation function and the detrended cross-correlation coefficient (DCCA coefficient). A 270-dimensional (270D) feature vector is constructed on three widely used datasets: ZD98, ZW225 and CL317, and a support vector machine is adopted as the classifier. Rigorous jackknife tests show that the overall prediction accuracies are significantly improved. The results show that our model offers a reliable and effective PSSM-based tool for prediction of apoptosis protein subcellular localization.
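Geary's autocorrelation, one of the two sequence-order descriptors named above, compares values separated by a fixed lag d along a sequence. A minimal sketch for a single numeric property sequence (e.g. one PSSM column); the paper's exact descriptor normalization may differ from this common form:

```python
def geary_autocorrelation(x, d):
    """Geary autocorrelation at lag d:
    C(d) = (n-1) * sum_{i=1..n-d} (x_i - x_{i+d})^2
           / (2*(n-d) * sum_{i=1..n} (x_i - mean)^2)."""
    n = len(x)
    mean = sum(x) / n
    num = (n - 1) * sum((x[i] - x[i + d]) ** 2 for i in range(n - d))
    den = 2 * (n - d) * sum((v - mean) ** 2 for v in x)
    return num / den

print(geary_autocorrelation([1.0, 2.0, 3.0, 4.0], 1))   # → 0.3
```

Computing C(d) for several lags over each PSSM column, and stacking the results, is the general recipe by which autocorrelation descriptors turn a variable-length sequence into a fixed-length feature vector for an SVM.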
NASA Technical Reports Server (NTRS)
Thomas, V. C.
1986-01-01
A Vibroacoustic Data Base Management Center has been established at the Jet Propulsion Laboratory (JPL). The center utilizes the Vibroacoustic Payload Environment Prediction System (VAPEPS) software package to manage a data base of shuttle and expendable launch vehicle flight and ground test data. Remote terminal access over telephone lines to a dedicated VAPEPS computer system has been established to provide the payload community a convenient means of querying the global VAPEPS data base. This guide describes the functions of the JPL Data Base Management Center and contains instructions for utilizing the resources of the center.
Paul, Keryn I; Roxburgh, Stephen H; Chave, Jerome; England, Jacqueline R; Zerihun, Ayalsew; Specht, Alison; Lewis, Tom; Bennett, Lauren T; Baker, Thomas G; Adams, Mark A; Huxtable, Dan; Montagu, Kelvin D; Falster, Daniel S; Feller, Mike; Sochacki, Stan; Ritson, Peter; Bastin, Gary; Bartle, John; Wildy, Dan; Hobbs, Trevor; Larmour, John; Waterworth, Rob; Stewart, Hugh T L; Jonson, Justin; Forrester, David I; Applegate, Grahame; Mendham, Daniel; Bradford, Matt; O'Grady, Anthony; Green, Daryl; Sudmeyer, Rob; Rance, Stan J; Turner, John; Barton, Craig; Wenk, Elizabeth H; Grove, Tim; Attiwill, Peter M; Pinkard, Elizabeth; Butler, Don; Brooksbank, Kim; Spencer, Beren; Snowdon, Peter; O'Brien, Nick; Battaglia, Michael; Cameron, David M; Hamilton, Steve; McAuthur, Geoff; Sinclair, Jenny
2016-06-01
Accurate ground-based estimation of the carbon stored in terrestrial ecosystems is critical to quantifying the global carbon budget. Allometric models provide cost-effective methods for biomass prediction. But do such models vary with ecoregion or plant functional type? We compiled 15 054 measurements of individual tree or shrub biomass from across Australia to examine the generality of allometric models for above-ground biomass prediction. This provided a robust case study because Australia includes ecoregions ranging from arid shrublands to tropical rainforests, and has a rich history of biomass research, particularly in planted forests. Regardless of ecoregion, for five broad categories of plant functional type (shrubs; multistemmed trees; trees of the genus Eucalyptus and closely related genera; other trees of high wood density; and other trees of low wood density), relationships between biomass and stem diameter were generic. Simple power-law models explained 84-95% of the variation in biomass, with little improvement in model performance when other plant variables (height, bole wood density), or site characteristics (climate, age, management) were included. Predictions of stand-based biomass from allometric models of varying levels of generalization (species-specific, plant functional type) were validated using whole-plot harvest data from 17 contrasting stands (range: 9-356 Mg ha(-1) ). Losses in efficiency of prediction were <1% if generalized models were used in place of species-specific models. Furthermore, application of generalized multispecies models did not introduce significant bias in biomass prediction in 92% of the 53 species tested. Further, overall efficiency of stand-level biomass prediction was 99%, with a mean absolute prediction error of only 13%. Hence, for cost-effective prediction of biomass across a wide range of stands, we recommend use of generic allometric models based on plant functional types. 
Development of new species-specific models is only warranted when gains in accuracy of stand-based predictions are relatively high (e.g. high-value monocultures). © 2015 John Wiley & Sons Ltd.
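The simple power-law models above, B = a·D^b, are conventionally fitted by ordinary least squares on the log-log form ln B = ln a + b·ln D. A minimal sketch on synthetic data (not the compiled Australian measurements):

```python
import math

def fit_power_law(diameters, biomasses):
    """OLS fit of ln(B) = ln(a) + b*ln(D), i.e. the allometric model B = a*D^b.
    Returns (a, b)."""
    xs = [math.log(d) for d in diameters]
    ys = [math.log(v) for v in biomasses]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

# exact synthetic data B = 0.2 * D^2.4 is recovered
ds = [5.0, 10.0, 20.0, 40.0]
a, b = fit_power_law(ds, [0.2 * d ** 2.4 for d in ds])
```

One practical caveat not shown here: with noisy data, back-transforming predictions from the log scale introduces a systematic underestimate, so allometric studies typically apply a bias-correction factor to exp(ln a + b·ln D).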
Evaluation of procedures for prediction of unconventional gas in the presence of geologic trends
Attanasi, E.D.; Coburn, T.C.
2009-01-01
This study extends the application of local spatial nonparametric prediction models to the estimation of recoverable gas volumes in continuous-type gas plays to regimes where there is a single geologic trend. A transformation is presented, originally proposed by Tomczak, that offsets the distortions caused by the trend. This article reports on numerical experiments that compare predictive and classification performance of the local nonparametric prediction models based on the transformation with models based on Euclidean distance. The transformation offers improvement in average root mean square error when the trend is not severely misspecified. Because of the local nature of the models, even those based on Euclidean distance in the presence of trends are reasonably robust. The tests based on other model performance metrics, such as prediction error associated with the high-grade tracts and the ability of the models to identify sites with the largest gas volumes, also demonstrate the robustness of both local modeling approaches. © International Association for Mathematical Geology 2009.
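The local models compared above predict each site from its nearest neighbors in (possibly transformed) coordinate space. A minimal inverse-distance k-nearest-neighbor sketch under Euclidean distance, with made-up site data; the study's models are more elaborate, but this is the basic local-prediction idea:

```python
import math

def knn_predict(known, query, k=3):
    """Local nonparametric prediction: inverse-distance-weighted average of
    the k nearest known sites. `known` is a list of ((x, y), value) pairs."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    nearest = sorted(known, key=lambda kv: dist(kv[0], query))[:k]
    if dist(nearest[0][0], query) == 0:          # exact hit at a known site
        return nearest[0][1]
    weights = [1.0 / dist(p, query) for p, _ in nearest]
    return sum(w * v for w, (_, v) in zip(weights, nearest)) / sum(weights)

sites = [((0.0, 0.0), 10.0), ((1.0, 0.0), 20.0),
         ((0.0, 1.0), 30.0), ((5.0, 5.0), 99.0)]
print(knn_predict(sites, (0.5, 0.0)))   # weighted toward the two nearest sites
```

A trend transformation of the kind discussed in the article would simply replace the raw (x, y) coordinates fed to `dist` with transformed ones, leaving the local predictor itself unchanged.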
Lau, Brian C; Collins, Michael W; Lovell, Mark R
2011-06-01
Concussions affect an estimated 136 000 high school athletes yearly. Computerized neurocognitive testing has been shown to be appropriately sensitive and specific in diagnosing concussions, but no studies have assessed its utility to predict length of recovery. Determining prognosis during subacute recovery after sports concussion will help clinicians more confidently address return-to-play and academic decisions. To quantify the prognostic ability of computerized neurocognitive testing in combination with symptoms during the subacute recovery phase from sports-related concussion. Cohort study (prognosis); Level of evidence, 2. In total, 108 male high school football athletes completed a computer-based neurocognitive test battery within 2.23 days of injury and were followed until they returned to play as set by international guidelines. Athletes were grouped into protracted recovery (>14 days; n = 50) or short recovery (≤14 days; n = 58). Separate discriminant function analyses were performed using total symptom score on the Post-Concussion Symptom Scale, symptom clusters (migraine, cognitive, sleep, neuropsychiatric), and Immediate Postconcussion Assessment and Cognitive Testing neurocognitive scores (verbal memory, visual memory, reaction time, processing speed). Multiple discriminant function analyses revealed that the combination of 4 symptom clusters and 4 neurocognitive composite scores had the highest sensitivity (65.22%), specificity (80.36%), positive predictive value (73.17%), and negative predictive value (73.80%) in predicting protracted recovery. Discriminant function analyses of total symptoms on the Post-Concussion Symptom Scale alone had a sensitivity of 40.81%; specificity, 79.31%; positive predictive value, 62.50%; and negative predictive value, 61.33%. Discriminant function analyses of the 4 symptom clusters alone had a sensitivity of 46.94%; specificity, 77.20%; positive predictive value, 63.90%; and negative predictive value, 62.86%.
Discriminant function analyses of the 4 computerized neurocognitive scores alone had a sensitivity of 53.20%; specificity, 75.44%; positive predictive value, 64.10%; and negative predictive value, 66.15%. The use of computerized neurocognitive testing in conjunction with symptom clusters improves the sensitivity, specificity, positive predictive value, and negative predictive value of predicting protracted recovery compared with each measure used alone. There is also a net increase in sensitivity of 24.41% when using neurocognitive testing and symptom clusters together compared with using total symptoms on the Post-Concussion Symptom Scale alone.
QSAR Classification Model for Antibacterial Compounds and Its Use in Virtual Screening
2012-09-26
test set molecules that were not used to train the models. This allowed us to more accurately estimate the prediction power of the models. As... pathogens and deposited in PubChem Bioassays. Ultimately, the main purpose of this model is to make predictions, based on known antibacterial and non... the model built from the remaining compounds is used to predict the left-out compound. Once all the compounds pass through this cycle of prediction, a
Settivari, Raja S; Ball, Nicholas; Murphy, Lynea; Rasoulpour, Reza; Boverhof, Darrell R; Carney, Edward W
2015-03-01
Interest in applying 21st-century toxicity testing tools for safety assessment of industrial chemicals is growing. Whereas conventional toxicology uses mainly animal-based, descriptive methods, a paradigm shift is emerging in which computational approaches, systems biology, high-throughput in vitro toxicity assays, and high-throughput exposure assessments are beginning to be applied to mechanism-based risk assessments in a time- and resource-efficient fashion. Here we describe recent advances in predictive safety assessment, with a focus on their strategic application to meet the changing demands of the chemical industry and its stakeholders. The opportunities to apply these new approaches are extensive and include screening of new chemicals, informing the design of safer and more sustainable chemical alternatives, filling information gaps on data-poor chemicals already in commerce, strengthening read-across methodology for categories of chemicals sharing similar modes of action, and optimizing the design of reduced-risk product formulations. Finally, we discuss how these predictive approaches dovetail with in vivo integrated testing strategies within repeated-dose regulatory toxicity studies, which are in line with 3Rs principles to refine, reduce, and replace animal testing. Strategic application of these tools is the foundation for informed and efficient safety assessment testing strategies that can be applied at all stages of the product-development process.
New closed-form approximation for skin chromophore mapping.
Välisuo, Petri; Kaartinen, Ilkka; Tuchin, Valery; Alander, Jarmo
2011-04-01
The concentrations of blood and melanin in skin can be estimated based on the reflectance of light. Many models for this estimation have been built, such as Monte Carlo simulation, diffusion models, and the differential modified Beer-Lambert law. The optimization-based methods are too slow for chromophore mapping of high-resolution spectral images, and the differential modified Beer-Lambert law is often not accurate enough. Optimal coefficients for the differential Beer-Lambert model are calculated by differentiating the diffusion model, optimized to the normal skin spectrum. The derivatives are then used in predicting the difference in chromophore concentrations from the difference in absorption spectra. The accuracy of the method is tested both computationally and experimentally, using a Monte Carlo multilayer simulation model and data measured from the palm of a hand during an Allen's test, which modulates the blood content of skin. The correlations of the given and predicted blood, melanin, and oxygen saturation levels are, respectively, r = 0.94, r = 0.99, and r = 0.73. The prediction of the concentrations for all pixels in a 1-megapixel image would take ∼20 min, which is orders of magnitude faster than methods based on optimization during the prediction.
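The prediction step described above is linear: once the diffusion model has been differentiated, concentration changes follow from absorbance changes by solving J·Δc = ΔA per pixel, which is what makes the method fast. A minimal two-wavelength, two-chromophore sketch using Cramer's rule; the Jacobian entries here are made up for illustration, not derived from any skin model:

```python
def solve_chromophore_change(jacobian, delta_a):
    """Differential Beer-Lambert sketch: given the 2x2 Jacobian of absorbance
    w.r.t. (blood, melanin) concentration at two wavelengths, recover the
    concentration changes from the absorbance changes via Cramer's rule."""
    (a, b), (c, d) = jacobian
    e, f = delta_a
    det = a * d - b * c
    if det == 0:
        raise ValueError("wavelengths give linearly dependent sensitivities")
    return ((e * d - b * f) / det, (a * f - e * c) / det)

# forward: dA = J @ dc with dc = (0.5, -0.2); inverse solve recovers dc
jac = ((2.0, 1.0), (1.0, 3.0))
print(solve_chromophore_change(jac, (0.8, -0.1)))
```

Because each pixel needs only this small linear solve instead of an iterative fit, mapping a full spectral image becomes a matter of arithmetic rather than optimization.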
Towards psychologically adaptive brain-computer interfaces
NASA Astrophysics Data System (ADS)
Myrden, A.; Chau, T.
2016-12-01
Objective. Brain-computer interface (BCI) performance is sensitive to short-term changes in psychological states such as fatigue, frustration, and attention. This paper explores the design of a BCI that can adapt to these short-term changes. Approach. Eleven able-bodied individuals participated in a study during which they used a mental task-based EEG-BCI to play a simple maze navigation game while self-reporting their perceived levels of fatigue, frustration, and attention. In an offline analysis, a regression algorithm was trained to predict changes in these states, yielding Pearson correlation coefficients in excess of 0.45 between the self-reported and predicted states. Two means of fusing the resultant mental state predictions with mental task classification were investigated. First, single-trial mental state predictions were used to predict correct classification by the BCI during each trial. Second, an adaptive BCI was designed that retrained a new classifier for each testing sample using only those training samples for which predicted mental state was similar to that predicted for the current testing sample. Main results. Mental state-based prediction of BCI reliability exceeded chance levels. The adaptive BCI exhibited significant, but practically modest, increases in classification accuracy for five of 11 participants and no significant difference for the remaining six despite a smaller average training set size. Significance. Collectively, these findings indicate that adaptation to psychological state may allow the design of more accurate BCIs.
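The adaptive scheme above can be caricatured in a few lines: filter the training set down to samples whose predicted mental state resembles the current test sample's, then classify on that subset. A toy sketch with scalar features, a scalar state estimate, and a 1-nearest-neighbor rule standing in for the BCI's actual classifier; all names and values are illustrative:

```python
def classify_adaptive(train, test_x, test_state, tol=0.1):
    """train: list of (feature, label, predicted_state) triples.
    Keep state-matched samples (falling back to the full set if none match),
    then classify the test feature by 1-nearest-neighbor on that subset."""
    subset = [(x, y) for x, y, s in train if abs(s - test_state) <= tol]
    if not subset:
        subset = [(x, y) for x, y, _ in train]
    return min(subset, key=lambda xy: abs(xy[0] - test_x))[1]

train = [(0.0, "rest", 0.10), (1.0, "math", 0.90),
         (0.2, "rest", 0.20), (0.9, "math", 0.15)]
print(classify_adaptive(train, test_x=0.95, test_state=0.12))   # → math
```

The trade-off reported in the paper is visible even in this toy: state matching shrinks the training set, so any accuracy gain from homogeneous training data has to outweigh the cost of fewer samples.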
Hippocampus Segmentation Based on Local Linear Mapping
Pang, Shumao; Jiang, Jun; Lu, Zhentai; Li, Xueli; Yang, Wei; Huang, Meiyan; Zhang, Yu; Feng, Yanqiu; Huang, Wenhua; Feng, Qianjin
2017-01-01
We propose local linear mapping (LLM), a novel fusion framework for distance field (DF) to perform automatic hippocampus segmentation. A k-means cluster method is proposed for constructing magnetic resonance (MR) and DF dictionaries. In LLM, we assume that the MR and DF samples are located on two nonlinear manifolds and the mapping from the MR manifold to the DF manifold is differentiable and locally linear. We combine the MR dictionary using local linear representation to represent the test sample, and combine the DF dictionary using the corresponding coefficients derived from the local linear representation procedure to predict the DF of the test sample. We then merge the overlapped predicted DF patches to obtain the DF value of each point in the test image via a confidence-based weighted average method. This approach enabled us to estimate the label of the test image according to the predicted DF. The proposed method was evaluated on brain images of 35 subjects obtained from the SATA dataset. Results indicate the effectiveness of the proposed method, which yields mean Dice similarity coefficients of 0.8697, 0.8770 and 0.8734 for the left, right and bilateral hippocampus, respectively. PMID:28368016
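Two of the simpler ingredients above are easy to make concrete: the confidence-weighted average that merges overlapping patch predictions into one value per voxel, and the Dice similarity coefficient used for evaluation. A minimal sketch with toy values, not the SATA data:

```python
def fuse(predictions):
    """Confidence-weighted average of overlapping patch predictions for one
    voxel; each entry is (predicted_value, confidence_weight)."""
    total = sum(w for _, w in predictions)
    return sum(v * w for v, w in predictions) / total

def dice(a, b):
    """Dice similarity coefficient between two voxel label sets:
    2*|A ∩ B| / (|A| + |B|)."""
    return 2 * len(a & b) / (len(a) + len(b))

print(fuse([(1.0, 1.0), (3.0, 3.0)]))     # → 2.5
print(dice({1, 2, 3, 4}, {2, 3, 4, 5}))   # → 0.75
```

The fused distance-field value at each voxel then determines the label (e.g. by its sign or a threshold), which is how the per-patch predictions become a final segmentation.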
NASA Astrophysics Data System (ADS)
Farrahi, G. H.; Ghodrati, M.; Azadi, M.; Rezvani Rad, M.
2014-08-01
This article presents the cyclic behavior of the A356.0 aluminum alloy under low-cycle (isothermal) fatigue and thermo-mechanical fatigue loadings. Since the thermo-mechanical fatigue (TMF) test is time-consuming and costly in comparison to low-cycle fatigue (LCF) tests, the purpose of this research is to use LCF test results to predict the TMF behavior of the material. A time-independent model based on a combined nonlinear isotropic/kinematic hardening law was used to predict the TMF behavior of the material. The material constants of this model were calibrated against room-temperature and high-temperature low-cycle fatigue tests. The nonlinear isotropic/kinematic hardening law could accurately estimate the stress-strain hysteresis loop under LCF conditions; however, under the out-of-phase TMF condition it could not properly predict the stress values because of strain-rate effects. Therefore, a two-layer visco-plastic model and the Johnson-Cook law were applied to improve the estimation of the stress-strain hysteresis loop. Finite element results based on the two-layer visco-plastic model demonstrated good agreement with the experimental TMF data for the A356.0 alloy.
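For readers unfamiliar with the ingredients, a minimal one-dimensional sketch of a combined nonlinear isotropic (Voce) / kinematic (Armstrong-Frederick) hardening update is given below. It uses explicit forward-Euler substepping and invented material constants, not the calibrated A356.0 values, and it omits the time-dependent two-layer visco-plastic model from the article.

```python
import numpy as np

# Invented constants for illustration (NOT the calibrated A356.0 values)
E, sy0 = 70e3, 120.0          # Young's modulus [MPa], initial yield stress [MPa]
C, gamma = 30e3, 200.0        # Armstrong-Frederick kinematic hardening
Q, b = 40.0, 10.0             # Voce isotropic hardening (saturation, rate)

def run_strain_path(path, n_sub=200):
    """Strain-driven forward-Euler integration of 1D combined
    isotropic/kinematic hardening; returns (strain, stress) history."""
    X = R = eps_p = eps = 0.0
    history = []
    for target in path:
        for e in np.linspace(eps, target, n_sub)[1:]:
            trial = E * (e - eps_p)
            f = abs(trial - X) - (sy0 + R)         # yield function
            if f > 0.0:
                n = np.sign(trial - X)
                dp = f / (E + C + b * (Q - R))     # explicit plastic multiplier
                eps_p += dp * n
                X += C * dp * n - gamma * X * dp   # kinematic: hardening + recall
                R += b * (Q - R) * dp              # isotropic: saturating growth
            history.append((e, E * (e - eps_p)))
        eps = target
    return np.array(history)

loop = run_strain_path([0.01, -0.01, 0.01])        # one strain-controlled cycle
smax, smin = float(loop[:, 1].max()), float(loop[:, 1].min())
```

The explicit multiplier neglects the recall term in the consistency condition, so small substeps are needed; production codes use an implicit return mapping instead.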
A microstructurally based model of solder joints under conditions of thermomechanical fatigue
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frear, D.R.; Burchett, S.N.; Rashid, M.M.
The thermomechanical fatigue failure of solder joints is increasingly becoming an important reliability issue. In this paper we present two computational methodologies, developed to predict the behavior of near-eutectic Sn-Pb solder joints under fatigue conditions, that use metallurgical tests as fundamental input for their constitutive relations. The two-phase model mathematically predicts the heterogeneous coarsening behavior of near-eutectic Sn-Pb solder. The finite element simulations from this model agree well with experimental thermomechanical fatigue tests. The simulations show that the presence of an initial heterogeneity in the solder microstructure can significantly degrade the fatigue lifetime. The single-phase model is a computational technique developed to predict solder joint behavior using materials data for constitutive-relation constants that can be determined through straightforward metallurgical experiments. A shear/torsion test sample was developed to impose strain in two different orientations. Materials constants were derived from these tests, and the results showed an adequate fit to experimental results. The single-phase model could be very useful under conditions where microstructural evolution is not a dominant factor in fatigue.
Kozma, Bence; Hirsch, Edit; Gergely, Szilveszter; Párta, László; Pataki, Hajnalka; Salgó, András
2017-10-25
In this study, near-infrared (NIR) and Raman spectroscopy were compared in parallel to predict the glucose concentration of Chinese hamster ovary cell cultivations. A shake-flask model system was used to quickly generate spectra similar to those of bioreactor cultivations, thereby accelerating the development of a working model prior to actual cultivations. Automated variable selection and several pre-processing methods were tested iteratively during model development using spectra from six shake-flask cultivations. The target was to achieve the lowest error of prediction for the glucose concentration in two independent shake flasks. The best model was then used to test the scalability of the two techniques by predicting spectra from a 10 L and a 100 L scale bioreactor cultivation. The NIR spectroscopy-based model could follow the trend of the glucose concentration, but it was not sufficiently accurate for bioreactor monitoring. The Raman spectroscopy-based model, on the other hand, predicted the concentration of glucose at both cultivation scales sufficiently accurately, with an error of around 4 mM (0.72 g/L), which is satisfactory for the on-line bioreactor monitoring purposes of the biopharma industry. The shake-flask model system was therefore shown to be suitable for scalable spectroscopic model development.
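As a schematic of what such a spectra-to-concentration calibration looks like, here is a minimal ridge-regression sketch on synthetic spectra. The band location, noise levels and regularisation are invented; the study itself used automated variable selection and iterative pre-processing, which are omitted here.

```python
import numpy as np

def fit_ridge(spectra, conc, alpha=1.0):
    """Mean-centred ridge regression from spectra to concentration."""
    xm, ym = spectra.mean(axis=0), conc.mean()
    X, y = spectra - xm, conc - ym
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)
    return w, xm, ym

def predict(w, xm, ym, spectra):
    return (spectra - xm) @ w + ym

# Synthetic "spectra": 100 wavenumber channels, one glucose-sensitive band
rng = np.random.default_rng(2)
true_w = np.zeros(100); true_w[40:45] = 0.5
spectra = rng.normal(size=(60, 100))
conc = spectra @ true_w + 0.05 * rng.normal(size=60)   # toy mM scale
w, xm, ym = fit_ridge(spectra, conc)
rmse = float(np.sqrt(np.mean((predict(w, xm, ym, spectra) - conc) ** 2)))
```

Chemometrics workflows typically use partial least squares rather than ridge regression; ridge is used here only because it keeps the sketch dependency-free.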
Saline suppression test parameters may predict bilateral subtypes of primary aldosteronism.
Hashimura, Hikaru; Shen, Jimmy; Fuller, Peter J; Chee, Nicholas Y N; Doery, James C G; Chong, Winston; Choy, Kay Weng; Gwini, Stella May; Yang, Jun
2018-06-06
The saline suppression test (SST) serves to confirm the diagnosis of primary aldosteronism (PA), while adrenal vein sampling (AVS) is used to determine whether the aldosterone hypersecretion is unilateral or bilateral. An accurate prediction of bilateral PA based on SST results could reduce the need for AVS. We sought to identify SST parameters that reliably predict bilateral PA. The results from 121 patients undergoing SSTs at Monash Health from January 2010 to January 2018, including screening blood tests, imaging, AVS and histopathology results, were evaluated. Patients were subtyped into unilateral or bilateral PA based on AVS and surgical outcomes. Of 113 patients with confirmed PA, 33 had unilateral disease while 42 had bilateral disease. In those with bilateral disease, the plasma aldosterone concentration (PAC) was significantly lower post-SST, together with a significant fall in the aldosterone-renin ratio (ARR). The combination of a PAC <300 pmol/L and a reduction in ARR post-SST provided 96.8% specificity in predicting bilateral disease. Eighteen out of 39 patients (49%) with bilateral PA could have avoided AVS using these criteria. A combination of a PAC <300 pmol/L and a lower ARR post-SST could reliably predict bilateral PA. An independent cohort will be needed to validate these findings. This article is protected by copyright. All rights reserved.
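The proposed criteria reduce to a simple decision rule, sketched below with the thresholds reported in the abstract (a screening aid only; sensitivity and clinical context are not encoded):

```python
def predict_bilateral_pa(pac_post_sst_pmol_l, arr_pre_sst, arr_post_sst,
                         pac_cutoff=300.0):
    """Predict bilateral primary aldosteronism from SST results using the
    abstract's criteria: post-SST PAC below 300 pmol/L AND a fall in the
    aldosterone-renin ratio (ARR) after saline infusion. The reported
    specificity was 96.8%; sensitivity is not encoded here."""
    return pac_post_sst_pmol_l < pac_cutoff and arr_post_sst < arr_pre_sst
```

For example, a patient with a post-SST PAC of 250 pmol/L whose ARR fell from 80 to 60 would be flagged as likely bilateral.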
Data for Room Fire Model Comparisons
Peacock, Richard D.; Davis, Sanford; Babrauskas, Vytenis
1991-01-01
With the development of models to predict fire growth and spread in buildings, there has been a concomitant evolution in the measurement and analysis of experimental data from real-scale fires. This report presents the types of analyses that can be used to examine large-scale room fire test data and to prepare the data for comparison with zone-based fire models. Five sets of experimental data which can be used to test the limits of a typical two-zone fire model are detailed. A standard set of nomenclature describing the geometry of the building and the quantities measured in each experiment is presented. Availability of ancillary data (such as smaller-scale test results) is included. These descriptions, along with the data (available in computer-readable form), should allow comparisons between the experiments and model predictions. The base of experimental data ranges in complexity from one-room tests with individual furniture items to a series of tests conducted in a multiple-story hotel equipped with a zoned smoke control system. PMID:28184121
Creep-fatigue life prediction for engine hot section materials (isotropic)
NASA Technical Reports Server (NTRS)
Moreno, V.
1982-01-01
The objectives of this program are to investigate fundamental approaches to high-temperature crack initiation life prediction, to identify specific modeling strategies, and to develop specific models for component-relevant loading conditions. A survey of the hot section material/coating systems used throughout the gas turbine industry is included. Two material/coating systems will be identified for the program. The material/coating system designated as the base system shall be used throughout Tasks 1-12. The alternate material/coating system will be used only in Task 12, for further evaluation of the models developed on the base material. In Task 2, candidate life prediction approaches will be screened against a set of criteria that includes experience with the approaches in the literature, correlation with isothermal data generated on the base material, and judgments on the applicability of each approach to the complex cycles to be considered in the option program. The two most promising approaches will be identified. Task 3 further evaluates the best approach using additional base-material fatigue testing, including verification tests. Task 4 consists of technical, schedule, financial and all other reporting requirements in accordance with the Reports of Work clause.
SU-F-R-51: Radiomics in CT Perfusion Maps of Head and Neck Cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nesteruk, M; Riesterer, O; Veit-Haibach, P
2016-06-15
Purpose: The aim of this study was to test the predictive value of radiomics features of CT perfusion (CTP) for tumor control, based on a preselection of radiomics features in a robustness study. Methods: 11 patients with head and neck cancer (HNC) and 11 patients with lung cancer were included in the robustness study to preselect stable radiomics parameters. Data from 36 HNC patients treated with definitive radiochemotherapy (median follow-up 30 months) were used to build a predictive model based on these parameters. All patients underwent pre-treatment CTP. 315 texture parameters were computed for each of three perfusion maps: blood volume, blood flow and mean transit time. The variability of the texture parameters was tested with respect to non-standardizable perfusion-computation factors (noise level and artery contouring) using intraclass correlation coefficients (ICC). Within each group of correlated parameters (inter-parameter Spearman correlations), the parameter with the highest ICC was tested for its predictive value. The final model to predict tumor control was built using multivariate Cox regression analysis with backward selection of the variables. For comparison, a predictive model based on tumor volume was created. Results: Ten parameters were found to be stable in both HNC and lung cancer with respect to the potentially non-standardizable factors, after correction for inter-parameter correlations. In the multivariate backward selection of the variables, blood flow entropy showed a highly significant impact on tumor control (p=0.03), with a concordance index (CI) of 0.76. Blood flow entropy was significantly lower in the patient group with controlled tumors at 18 months (p<0.1). The new model showed a higher concordance index than the tumor volume model (CI=0.68). Conclusion: The preselection of variables in the robustness study allowed a predictive radiomics-based model of tumor control in HNC to be built despite the small patient cohort. This model was found to be superior to the volume-based model. The project was supported by the KFSP Tumor Oxygenation of the University of Zurich, by a grant of the Center for Clinical Research, University and University Hospital Zurich, and by a research grant from Merck (Schweiz) AG.
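The two-step preselection described in the Methods, keeping parameters that are stable across computation settings (high ICC) and then keeping one representative per group of mutually correlated parameters, can be sketched as follows. The ICC variant, thresholds and toy features are assumptions for illustration.

```python
import numpy as np

def icc_one_way(M):
    """One-way random-effects ICC(1,1) for a (subjects x settings) matrix,
    used as a robustness score across perfusion-computation settings."""
    n, k = M.shape
    msb = k * np.sum((M.mean(axis=1) - M.mean()) ** 2) / (n - 1)
    msw = np.sum((M - M.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

def spearman(a, b):
    """Spearman rank correlation via double argsort (no ties assumed)."""
    ra, rb = np.argsort(np.argsort(a)), np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

def preselect(features, icc_thresh=0.8, corr_thresh=0.8):
    """Keep robust features (ICC >= icc_thresh), then one representative
    (highest ICC) per group of mutually correlated features."""
    scores = {f: icc_one_way(M) for f, M in features.items()}
    stable = [f for f in sorted(scores, key=scores.get, reverse=True)
              if scores[f] >= icc_thresh]
    kept = []
    for f in stable:
        prof = features[f].mean(axis=1)          # subject-level profile
        if all(abs(spearman(prof, features[g].mean(axis=1))) < corr_thresh
               for g in kept):
            kept.append(f)
    return kept

# Toy features: 30 subjects x 4 computation settings
rng = np.random.default_rng(3)
subj = rng.normal(size=(30, 1))
features = {
    "bf_entropy":  subj + 0.01 * rng.normal(size=(30, 4)),      # robust
    "bv_entropy":  2 * subj + 0.01 * rng.normal(size=(30, 4)),  # robust, redundant
    "mtt_noise":   rng.normal(size=(30, 4)),                    # not robust
}
kept = preselect(features)
```

On the toy data the unstable feature is dropped by the ICC threshold and only one of the two redundant robust features survives the correlation step.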