Sample records for validating analytical predictions

  1. Validation of biomarkers to predict response to immunotherapy in cancer: Volume I - pre-analytical and analytical validation.

    PubMed

    Masucci, Giuseppe V; Cesano, Alessandra; Hawtin, Rachael; Janetzki, Sylvia; Zhang, Jenny; Kirsch, Ilan; Dobbin, Kevin K; Alvarez, John; Robbins, Paul B; Selvan, Senthamil R; Streicher, Howard Z; Butterfield, Lisa H; Thurin, Magdalena

    2016-01-01

    Immunotherapies have emerged as one of the most promising approaches to treat patients with cancer. Recently, there have been many clinical successes using checkpoint receptor blockade, including T cell inhibitory receptors such as cytotoxic T-lymphocyte-associated antigen 4 (CTLA-4) and programmed cell death-1 (PD-1). Despite demonstrated successes in a variety of malignancies, responses typically occur in only a minority of patients in any given histology. Additionally, treatment is associated with inflammatory toxicity and high cost. Therefore, determining which patients would derive clinical benefit from immunotherapy is a compelling clinical question. Although numerous candidate biomarkers have been described, there are currently three FDA-approved assays based on PD-1 ligand expression (PD-L1) that have been clinically validated to identify patients who are more likely to benefit from a single-agent anti-PD-1/PD-L1 therapy. Because of the complexity of the immune response and tumor biology, it is unlikely that a single biomarker will be sufficient to predict clinical outcomes in response to immune-targeted therapy. Rather, the integration of multiple tumor and immune response parameters, such as protein expression, genomics, and transcriptomics, may be necessary for accurate prediction of clinical benefit. Before a candidate biomarker and/or new technology can be used in a clinical setting, several steps are necessary to demonstrate its clinical validity. Although regulatory guidelines provide general roadmaps for the validation process, their applicability to biomarkers in the cancer immunotherapy field is somewhat limited. Thus, Working Group 1 (WG1) of the Society for Immunotherapy of Cancer (SITC) Immune Biomarkers Task Force convened to address this need. In this two-volume series, we discuss pre-analytical and analytical (Volume I) as well as clinical and regulatory (Volume II) aspects of the validation process as applied to predictive biomarkers

  2. Predictive analytics and child protection: constraints and opportunities.

    PubMed

    Russell, Jesse

    2015-08-01

    This paper considers how predictive analytics might inform, assist, and improve decision making in child protection. Predictive analytics represents recent increases in data quantity and data diversity, along with advances in computing technology. While the use of data and statistical modeling is not new to child protection decision making, its use in child protection is experiencing growth, and efforts to leverage predictive analytics for better decision-making in child protection are increasing. Past experiences, constraints and opportunities are reviewed. For predictive analytics to make the most impact on child protection practice and outcomes, it must embrace established criteria of validity, equity, reliability, and usefulness. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Validating Analytical Methods

    ERIC Educational Resources Information Center

    Ember, Lois R.

    1977-01-01

    The procedures utilized by the Association of Official Analytical Chemists (AOAC) to develop, evaluate, and validate analytical methods for the analysis of chemical pollutants are detailed. Methods validated by AOAC are used by the EPA and FDA in their enforcement programs and are granted preferential treatment by the courts. (BT)

  4. IBM Watson Analytics: Automating Visualization, Descriptive, and Predictive Statistics

    PubMed Central

    2016-01-01

    Background: We live in an era of explosive data generation that will continue to grow and involve all industries. One of the results of this explosion is the need for newer and more efficient data analytics procedures. Traditionally, data analytics required a substantial background in statistics and computer science. In 2015, International Business Machines Corporation (IBM) released the IBM Watson Analytics (IBMWA) software that delivered advanced statistical procedures based on the Statistical Package for the Social Sciences (SPSS). The latest entry of Watson Analytics into the field of analytical software products provides users with enhanced functions that are not available in many existing programs. For example, Watson Analytics automatically analyzes datasets, examines data quality, and determines the optimal statistical approach. Users can request exploratory, predictive, and visual analytics. Using natural language processing (NLP), users are able to submit additional questions for analyses in a quick response format. This analytical package is available free to academic institutions (faculty and students) that plan to use the tools for noncommercial purposes. Objective: To report the features of IBMWA and discuss how this software subjectively and objectively compares to other data mining programs. Methods: The salient features of the IBMWA program were examined and compared with other common analytical platforms, using validated health datasets. Results: Using a validated dataset, IBMWA delivered similar predictions compared with several commercial and open source data mining software applications. The visual analytics generated by IBMWA were similar to results from programs such as Microsoft Excel and Tableau Software. In addition, assistance with data preprocessing and data exploration was an inherent component of the IBMWA application. 
Sensitivity and specificity were not included in the IBMWA predictive analytics results, nor were odds ratios, confidence intervals, or a confusion matrix.

  5. IBM Watson Analytics: Automating Visualization, Descriptive, and Predictive Statistics.

    PubMed

    Hoyt, Robert Eugene; Snider, Dallas; Thompson, Carla; Mantravadi, Sarita

    2016-10-11

    We live in an era of explosive data generation that will continue to grow and involve all industries. One of the results of this explosion is the need for newer and more efficient data analytics procedures. Traditionally, data analytics required a substantial background in statistics and computer science. In 2015, International Business Machines Corporation (IBM) released the IBM Watson Analytics (IBMWA) software that delivered advanced statistical procedures based on the Statistical Package for the Social Sciences (SPSS). The latest entry of Watson Analytics into the field of analytical software products provides users with enhanced functions that are not available in many existing programs. For example, Watson Analytics automatically analyzes datasets, examines data quality, and determines the optimal statistical approach. Users can request exploratory, predictive, and visual analytics. Using natural language processing (NLP), users are able to submit additional questions for analyses in a quick response format. This analytical package is available free to academic institutions (faculty and students) that plan to use the tools for noncommercial purposes. To report the features of IBMWA and discuss how this software subjectively and objectively compares to other data mining programs. The salient features of the IBMWA program were examined and compared with other common analytical platforms, using validated health datasets. Using a validated dataset, IBMWA delivered similar predictions compared with several commercial and open source data mining software applications. The visual analytics generated by IBMWA were similar to results from programs such as Microsoft Excel and Tableau Software. In addition, assistance with data preprocessing and data exploration was an inherent component of the IBMWA application. Sensitivity and specificity were not included in the IBMWA predictive analytics results, nor were odds ratios, confidence intervals, or a confusion matrix
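
The diagnostic metrics the two IBMWA records note as missing (sensitivity, specificity, odds ratios, and a confusion matrix) are mechanical to derive once a 2x2 confusion matrix is in hand. A minimal sketch with hypothetical counts; this reflects the standard definitions, not IBMWA's actual interface:

```python
def binary_metrics(tp, fp, fn, tn):
    """Standard diagnostic metrics derived from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)        # true-positive rate
    specificity = tn / (tn + fp)        # true-negative rate
    odds_ratio = (tp * tn) / (fp * fn)  # diagnostic odds ratio
    return sensitivity, specificity, odds_ratio

# Hypothetical counts for illustration only.
sens, spec, dor = binary_metrics(tp=80, fp=10, fn=20, tn=90)
```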

  6. MetaKTSP: a meta-analytic top scoring pair method for robust cross-study validation of omics prediction analysis.

    PubMed

    Kim, SungHwan; Lin, Chien-Wei; Tseng, George C

    2016-07-01

    Supervised machine learning is widely applied to transcriptomic data to predict disease diagnosis, prognosis or survival. Robust and interpretable classifiers with high accuracy are usually favored for their clinical and translational potential. The top scoring pair (TSP) algorithm is an example that applies a simple rank-based algorithm to identify rank-altered gene pairs for classifier construction. Although many classification methods perform well in cross-validation of single expression profile, the performance usually greatly reduces in cross-study validation (i.e. the prediction model is established in the training study and applied to an independent test study) for all machine learning methods, including TSP. The failure of cross-study validation has largely diminished the potential translational and clinical values of the models. The purpose of this article is to develop a meta-analytic top scoring pair (MetaKTSP) framework that combines multiple transcriptomic studies and generates a robust prediction model applicable to independent test studies. We proposed two frameworks, by averaging TSP scores or by combining P-values from individual studies, to select the top gene pairs for model construction. We applied the proposed methods in simulated data sets and three large-scale real applications in breast cancer, idiopathic pulmonary fibrosis and pan-cancer methylation. The result showed superior performance of cross-study validation accuracy and biomarker selection for the new meta-analytic framework. In conclusion, combining multiple omics data sets in the public domain increases robustness and accuracy of the classification model that will ultimately improve disease understanding and clinical treatment decisions to benefit patients. An R package MetaKTSP is available online. (http://tsenglab.biostat.pitt.edu/software.htm). ctseng@pitt.edu Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. 
All rights reserved.
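
The top scoring pair idea described in this record can be sketched in a few lines: score a gene pair by how differently its within-sample ranking flips between the two classes, then average that score across studies (one of the two meta-analytic variants mentioned). The matrices and indices below are toy assumptions, not the MetaKTSP package's interface:

```python
import numpy as np

def tsp_score(expr, labels, i, j):
    """Top scoring pair: between-class difference in the frequency of the
    rank flip expr[i] < expr[j]. expr is a genes-x-samples matrix."""
    flips = expr[i] < expr[j]
    return abs(flips[labels == 0].mean() - flips[labels == 1].mean())

def meta_tsp_score(studies, i, j):
    """Meta-analytic variant: average the pair's TSP score across studies."""
    return float(np.mean([tsp_score(e, y, i, j) for e, y in studies]))

# Toy data: a tiny "study" as (genes x samples matrix, 0/1 class labels).
study = (np.array([[1., 2., 5., 6.],
                   [3., 4., 1., 2.]]), np.array([0, 0, 1, 1]))
score = meta_tsp_score([study, study], 0, 1)
```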

  7. Empirically Optimized Flow Cytometric Immunoassay Validates Ambient Analyte Theory

    PubMed Central

    Parpia, Zaheer A.; Kelso, David M.

    2010-01-01

    Ekins’ ambient analyte theory predicts, counterintuitively, that an immunoassay’s limit of detection can be improved by reducing the amount of capture antibody. It also anticipates that results should be insensitive to the volume of sample as well as the amount of capture antibody added. The objective of this study is to empirically validate all of the performance characteristics predicted by Ekins’ theory. Flow cytometric analysis was used to detect binding between a fluorescent ligand and capture microparticles since it can directly measure fractional occupancy, the primary response variable in ambient analyte theory. After experimentally determining ambient analyte conditions, comparisons were carried out between ambient and non-ambient assays in terms of their signal strengths, limits of detection, and sensitivity to variations in reaction volume and number of particles. The critical number of binding sites required for an assay to be in the ambient analyte region was estimated to be 0.1·V·Kd. As predicted, such assays exhibited superior signal/noise levels and limits of detection, and were not affected by variations in sample volume and number of binding sites. When the signal detected measures fractional occupancy, ambient analyte theory is an excellent guide to developing assays with superior performance characteristics. PMID:20152793
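
The equilibrium behind this record's claim can be checked numerically: solve the quadratic binding equilibrium for fractional occupancy and verify that, when the number of binding sites is far below 0.1·V·Kd, occupancy collapses to the depletion-free value a0/(a0 + Kd) regardless of sample volume. A sketch with illustrative values, not the authors' assay parameters:

```python
import math

def fractional_occupancy(a0, kd, sites, volume):
    """Exact equilibrium fractional occupancy of capture sites.
    a0: total analyte concentration (M); kd: dissociation constant (M);
    sites: total capture binding sites (mol); volume: sample volume (L)."""
    b = sites / volume                            # site concentration (M)
    s = a0 + b + kd                               # bound conc. x solves
    x = (s - math.sqrt(s * s - 4 * a0 * b)) / 2   # x^2 - s*x + a0*b = 0
    return x / b

kd = 1e-9
# Ambient-analyte regime: sites << 0.1 * volume * Kd, so occupancy tracks the
# ideal depletion-free value a0 / (a0 + Kd) regardless of volume or site count.
f = fractional_occupancy(a0=1e-10, kd=kd, sites=1e-17, volume=1e-4)
f_ideal = 1e-10 / (1e-10 + kd)
```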

  8. The analytical validation of the Oncotype DX Recurrence Score assay

    PubMed Central

    Baehner, Frederick L

    2016-01-01

    In vitro diagnostic multivariate index assays are highly complex molecular assays that can provide clinically actionable information regarding the underlying tumour biology and facilitate personalised treatment. These assays are only useful in clinical practice if all of the following are established: analytical validation (i.e., how accurately/reliably the assay measures the molecular characteristics), clinical validation (i.e., how consistently/accurately the test detects/predicts the outcomes of interest), and clinical utility (i.e., how likely the test is to significantly improve patient outcomes). In considering the use of these assays, clinicians often focus primarily on the clinical validity/utility; however, the analytical validity of an assay (e.g., its accuracy, reproducibility, and standardisation) should also be evaluated and carefully considered. This review focuses on the rigorous analytical validation and performance of the Oncotype DX® Breast Cancer Assay, which is performed at the Central Clinical Reference Laboratory of Genomic Health, Inc. The assay process includes tumour tissue enrichment (if needed), RNA extraction, gene expression quantitation (using a gene panel consisting of 16 cancer genes plus 5 reference genes and quantitative real-time RT-PCR), and an automated computer algorithm to produce a Recurrence Score® result (scale: 0–100). This review presents evidence showing that the Recurrence Score result reported for each patient falls within a tight clinically relevant confidence interval. Specifically, the review discusses how the development of the assay was designed to optimise assay performance, presents data supporting its analytical validity, and describes the quality control and assurance programmes that ensure optimal test performance over time. PMID:27729940

  9. The analytical validation of the Oncotype DX Recurrence Score assay.

    PubMed

    Baehner, Frederick L

    2016-01-01

    In vitro diagnostic multivariate index assays are highly complex molecular assays that can provide clinically actionable information regarding the underlying tumour biology and facilitate personalised treatment. These assays are only useful in clinical practice if all of the following are established: analytical validation (i.e., how accurately/reliably the assay measures the molecular characteristics), clinical validation (i.e., how consistently/accurately the test detects/predicts the outcomes of interest), and clinical utility (i.e., how likely the test is to significantly improve patient outcomes). In considering the use of these assays, clinicians often focus primarily on the clinical validity/utility; however, the analytical validity of an assay (e.g., its accuracy, reproducibility, and standardisation) should also be evaluated and carefully considered. This review focuses on the rigorous analytical validation and performance of the Oncotype DX ® Breast Cancer Assay, which is performed at the Central Clinical Reference Laboratory of Genomic Health, Inc. The assay process includes tumour tissue enrichment (if needed), RNA extraction, gene expression quantitation (using a gene panel consisting of 16 cancer genes plus 5 reference genes and quantitative real-time RT-PCR), and an automated computer algorithm to produce a Recurrence Score ® result (scale: 0-100). This review presents evidence showing that the Recurrence Score result reported for each patient falls within a tight clinically relevant confidence interval. Specifically, the review discusses how the development of the assay was designed to optimise assay performance, presents data supporting its analytical validity, and describes the quality control and assurance programmes that ensure optimal test performance over time.

  10. Configuration and validation of an analytical model predicting secondary neutron radiation in proton therapy using Monte Carlo simulations and experimental measurements.

    PubMed

    Farah, J; Bonfrate, A; De Marzi, L; De Oliveira, A; Delacroix, S; Martinetti, F; Trompier, F; Clairand, I

    2015-05-01

    This study focuses on the configuration and validation of an analytical model predicting leakage neutron doses in proton therapy. Using Monte Carlo (MC) calculations, a facility-specific analytical model was built to reproduce out-of-field neutron doses while separately accounting for the contribution of intra-nuclear cascade, evaporation, epithermal and thermal neutrons. This model was first trained to reproduce in-water neutron absorbed doses and in-air neutron ambient dose equivalents, H*(10), calculated using MCNPX. Its capacity to predict out-of-field doses at positions not involved in the training phase was also checked. The model was next expanded to enable a full 3D mapping of H*(10) inside the treatment room, tested in a clinically relevant configuration and finally consolidated with experimental measurements. Following the literature approach, the work first proved that it is possible to build a facility-specific analytical model that efficiently reproduces in-water neutron doses and in-air H*(10) values with a maximum difference of less than 25%. In addition, the analytical model succeeded in predicting out-of-field neutron doses in the lateral and vertical directions. Testing the analytical model in clinical configurations proved the need to separate the contributions of internal and external neutrons. The impact of modulation width on stray neutrons was found to be easily adjustable, while beam collimation remains a challenging issue. Finally, the model performance agreed with experimental measurements, with satisfactory results considering measurement and simulation uncertainties. Analytical models represent a promising solution that substitutes for time-consuming MC calculations when assessing doses to healthy organs. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  11. The forensic validity of visual analytics

    NASA Astrophysics Data System (ADS)

    Erbacher, Robert F.

    2008-01-01

    The wider use of visualization and visual analytics in wide-ranging fields has led to the need for visual analytics capabilities to be legally admissible, especially when applied to digital forensics. This brings the need to consider legal implications when performing visual analytics, an issue not traditionally examined in visualization and visual analytics techniques and research. While digital data is generally admissible under the Federal Rules of Evidence [10][21], a comprehensive validation of the digital evidence is considered prudent. A comprehensive validation requires validation of the digital data under rules for authentication, hearsay, the best evidence rule, and privilege. Additional issues with digital data arise when exploring digital data related to admissibility and the validity of what information was examined, to what extent, and whether the analysis process was sufficiently covered by a search warrant. For instance, a search warrant generally covers very narrow requirements as to what law enforcement is allowed to examine and acquire during an investigation. When searching a hard drive for child pornography, how admissible is evidence of an unrelated crime, e.g., drug dealing? This is further complicated by the concept of "in plain view": when performing an analysis of a hard drive, what would be considered "in plain view"? The purpose of this paper is to discuss the issues of digital forensics and the related issues as they apply to visual analytics and identify how visual analytics techniques fit into the digital forensics analysis process, how visual analytics techniques can improve the legal admissibility of digital data, and identify what research is needed to further improve this process. 
The goal of this paper is to open up consideration of legal ramifications among the visualization community; the author is not a lawyer and the discussions are not meant to be inclusive of all differences in laws between states and

  12. Analytical and experimental validation of the Oblique Detonation Wave Engine concept

    NASA Technical Reports Server (NTRS)

    Adelman, Henry G.; Cambier, Jean-Luc; Menees, Gene P.; Balboni, John A.

    1988-01-01

    The Oblique Detonation Wave Engine (ODWE) for hypersonic flight has been analytically studied by NASA using CFD codes that fully couple finite-rate chemistry with fluid dynamics. Fuel injector designs investigated included wall and strut injectors, and the in-stream strut injectors were chosen to provide good mixing with minimal stagnation pressure losses. Plans for experimentally validating the ODWE concept in an arc-jet hypersonic wind tunnel are discussed. Measurements of the flow field properties behind the oblique wave will be compared to analytical predictions.

  13. Multi-analyte validation in heterogeneous solution by ELISA.

    PubMed

    Lakshmipriya, Thangavel; Gopinath, Subash C B; Hashim, Uda; Murugaiyah, Vikneswaran

    2017-12-01

    Enzyme-Linked Immunosorbent Assay (ELISA) is a standard assay that has been used widely to validate the presence of an analyte in solution. With the advancement of ELISA, different strategies have been demonstrated, making it a suitable immunoassay for a wide range of analytes. Herein, we attempted to provide additional evidence with ELISA, to show its suitability for multi-analyte detection. To demonstrate, three clinically relevant targets were chosen: the 16 kDa protein from Mycobacterium tuberculosis, human blood clotting Factor IXa and the tumour marker Squamous Cell Carcinoma (SCC) antigen. Indeed, we adapted the routine steps from the conventional ELISA to validate the occurrence of analytes in both homogeneous and heterogeneous solutions. With the homogeneous and heterogeneous solutions, we could attain sensitivities of 2, 8 and 1 nM for the 16 kDa protein, FIXa and SCC antigen, respectively. Further, the specific multi-analyte validations were evidenced with similar sensitivities in the presence of human serum. The ELISA assay in this study has proven its applicability for genuine multi-analyte validation in heterogeneous solution and can be followed for other target validations. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Improved partition equilibrium model for predicting analyte response in electrospray ionization mass spectrometry.

    PubMed

    Du, Lihong; White, Robert L

    2009-02-01

    A previously proposed partition equilibrium model for quantitative prediction of analyte response in electrospray ionization mass spectrometry is modified to yield an improved linear relationship. Analyte mass spectrometer response is modeled by a competition mechanism between analyte and background electrolytes that is based on partition equilibrium considerations. The correlation between analyte response and solution composition is described by the linear model over a wide concentration range and the improved model is shown to be valid for a wide range of experimental conditions. The behavior of an analyte in a salt solution, which could not be explained by the original model, is correctly predicted. The ion suppression effects of 16:0 lysophosphatidylcholine (LPC) on analyte signals are attributed to a combination of competition for excess charge and reduction of total charge due to surface tension effects. In contrast to the complicated mathematical forms that comprise the original model, the simplified model described here can more easily be employed to predict analyte mass spectrometer responses for solutions containing multiple components. Copyright (c) 2008 John Wiley & Sons, Ltd.
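
The competition mechanism described in this record lends itself to a one-function sketch: the analyte's signal is its partition-weighted share of a fixed excess charge, so adding background electrolyte suppresses it. The coefficients and concentrations below are illustrative assumptions, not the paper's fitted model:

```python
def esi_response(c_analyte, k_analyte, background, total_charge=1.0):
    """Charge-competition sketch: the analyte signal is proportional to its
    share of the limited excess charge, weighted by partition coefficients.
    background: list of (concentration, k) pairs for competing species."""
    competing = sum(c * k for c, k in background)
    numer = k_analyte * c_analyte
    return total_charge * numer / (numer + competing)

# Raising the background electrolyte concentration suppresses the signal.
low_bg = esi_response(1.0, 2.0, background=[(1.0, 2.0)])
high_bg = esi_response(1.0, 2.0, background=[(10.0, 2.0)])
```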

  15. Analytic Formulation and Numerical Implementation of an Acoustic Pressure Gradient Prediction

    NASA Technical Reports Server (NTRS)

    Lee, Seongkyu; Brentner, Kenneth S.; Farassat, F.; Morris, Philip J.

    2008-01-01

    Two new analytical formulations of the acoustic pressure gradient have been developed and implemented in the PSU-WOPWOP rotor noise prediction code. The pressure gradient can be used to enforce the boundary condition in scattering problems and is a key element in solving acoustic scattering problems. The first formulation is derived from the gradient of the Ffowcs Williams-Hawkings (FW-H) equation. This formulation has a form involving the observer time differentiation outside the integrals. In the second formulation, the time differentiation is taken inside the integrals analytically. This formulation avoids the numerical time differentiation with respect to the observer time, which is computationally more efficient. The acoustic pressure gradient predicted by these new formulations is validated through comparison with available exact solutions for stationary and moving monopole sources. The agreement between the predictions and exact solutions is excellent. The formulations are applied to the rotor noise problems for two model rotors. A purely numerical approach is compared with the analytical formulations. The agreement between the analytical formulations and the numerical method is excellent for both stationary and moving observer cases.

  16. Analytical procedure validation and the quality by design paradigm.

    PubMed

    Rozet, Eric; Lebrun, Pierre; Michiels, Jean-François; Sondag, Perceval; Scherder, Tara; Boulanger, Bruno

    2015-01-01

    Since the adoption of the ICH Q8 document concerning the development of pharmaceutical processes following a quality by design (QbD) approach, there have been many discussions on the opportunity for analytical procedure development to follow a similar approach. While development and optimization of analytical procedures following QbD principles have been largely discussed and described, the place of analytical procedure validation in this framework has not been clarified. This article aims at showing that analytical procedure validation is fully integrated into the QbD paradigm and is an essential step in developing analytical procedures that are effectively fit for purpose. Adequate statistical methodologies also have their role to play, such as design of experiments, statistical modeling, and probabilistic statements. The outcome of analytical procedure validation is also an analytical procedure design space, and from it, a control strategy can be set.

  17. Thermal conductivity of microporous layers: Analytical modeling and experimental validation

    NASA Astrophysics Data System (ADS)

    Andisheh-Tadbir, Mehdi; Kjeang, Erik; Bahrami, Majid

    2015-11-01

    A new compact relationship is developed for the thermal conductivity of the microporous layer (MPL) used in polymer electrolyte fuel cells as a function of pore size distribution, porosity, and compression pressure. The proposed model is successfully validated against experimental data obtained from a transient plane source thermal constants analyzer. The thermal conductivities of carbon paper samples with and without MPL were measured as a function of load (1-6 bars) and the MPL thermal conductivity was found between 0.13 and 0.17 W m-1 K-1. The proposed analytical model predicts the experimental thermal conductivities within 5%. A correlation generated from the analytical model was used in a multi objective genetic algorithm to predict the pore size distribution and porosity for an MPL with optimized thermal conductivity and mass diffusivity. The results suggest that an optimized MPL, in terms of heat and mass transfer coefficients, has an average pore size of 122 nm and 63% porosity.

  18. Exploring the Potential of Predictive Analytics and Big Data in Emergency Care.

    PubMed

    Janke, Alexander T; Overbeek, Daniel L; Kocher, Keith E; Levy, Phillip D

    2016-02-01

    Clinical research often focuses on resource-intensive causal inference, whereas the potential of predictive analytics with constantly increasing big data sources remains largely unexplored. Basic prediction, divorced from causal inference, is much easier with big data. Emergency care may benefit from this simpler application of big data. Historically, predictive analytics have played an important role in emergency care as simple heuristics for risk stratification. These tools generally follow a standard approach: parsimonious criteria, easy computability, and independent validation with distinct populations. Simplicity in a prediction tool is valuable, but technological advances make it no longer a necessity. Emergency care could benefit from clinical predictions built using data science tools with abundant potential input variables available in electronic medical records. Patients' risks could be stratified more precisely with large pools of data and lower resource requirements for comparing each clinical encounter to those that came before it, benefiting clinical decisionmaking and health systems operations. The largest value of predictive analytics comes early in the clinical encounter, in which diagnostic and prognostic uncertainty are high and resource-committing decisions need to be made. We propose an agenda for widening the application of predictive analytics in emergency care. Throughout, we express cautious optimism because there are myriad challenges related to database infrastructure, practitioner uptake, and patient acceptance. The quality of routinely compiled clinical data will remain an important limitation. Complementing big data sources with prospective data may be necessary if predictive analytics are to achieve their full potential to improve care quality in the emergency department. Copyright © 2015 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.

  19. Analytical modeling and experimental validation of a magnetorheological mount

    NASA Astrophysics Data System (ADS)

    Nguyen, The; Ciocanel, Constantin; Elahinia, Mohammad

    2009-03-01

    Magnetorheological (MR) fluid has been increasingly researched and applied in vibration isolation devices. To date, the suspension systems of several high-performance vehicles have been equipped with MR fluid based dampers, and research is ongoing to develop MR fluid based mounts for engine and powertrain isolation. MR fluid based devices have received attention due to the MR fluid's capability to change its properties in the presence of a magnetic field. This characteristic places MR mounts in the class of semiactive isolators, making them a desirable substitute for passive hydraulic mounts. In this research, an analytical model of a mixed-mode MR mount was constructed. The magnetorheological mount employs flow (valve) mode and squeeze mode. Each mode is powered by an independent electromagnet, so one mode does not affect the operation of the other. The analytical model was used to predict the performance of the MR mount with different sets of parameters. Furthermore, in order to produce the actual prototype, the analytical model was used to identify the optimal geometry of the mount. The experimental phase of this research was carried out by fabricating and testing the actual MR mount. The manufactured mount was tested to evaluate the effectiveness of each mode individually and in combination. The experimental results were also used to validate the ability of the analytical model to predict the response of the MR mount. Based on the observed response of the mount, a suitable controller can be designed for it. However, the control scheme is not addressed in this study.

  20. Ethical leadership: meta-analytic evidence of criterion-related and incremental validity.

    PubMed

    Ng, Thomas W H; Feldman, Daniel C

    2015-05-01

    This study examines the criterion-related and incremental validity of ethical leadership (EL) with meta-analytic data. Across 101 samples published over the last 15 years (N = 29,620), we observed that EL demonstrated acceptable criterion-related validity with variables that tap followers' job attitudes, job performance, and evaluations of their leaders. Further, followers' trust in the leader mediated the relationships of EL with job attitudes and performance. In terms of incremental validity, we found that EL significantly, albeit weakly in some cases, predicted task performance, citizenship behavior, and counterproductive work behavior-even after controlling for the effects of such variables as transformational leadership, use of contingent rewards, management by exception, interactional fairness, and destructive leadership. The article concludes with a discussion of ways to strengthen the incremental validity of EL. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  1. Geographic and temporal validity of prediction models: Different approaches were useful to examine model performance

    PubMed Central

    Austin, Peter C.; van Klaveren, David; Vergouwe, Yvonne; Nieboer, Daan; Lee, Douglas S.; Steyerberg, Ewout W.

    2017-01-01

    Objective: Validation of clinical prediction models traditionally refers to the assessment of model performance in new patients. We studied different approaches to geographic and temporal validation in the setting of multicenter data from two time periods. Study Design and Setting: We illustrated different analytic methods for validation using a sample of 14,857 patients hospitalized with heart failure at 90 hospitals in two distinct time periods. Bootstrap resampling was used to assess internal validity. Meta-analytic methods were used to assess geographic transportability. Each hospital was used once as a validation sample, with the remaining hospitals used for model derivation. Hospital-specific estimates of discrimination (c-statistic) and calibration (calibration intercepts and slopes) were pooled using random effects meta-analysis methods. I2 statistics and prediction interval width quantified geographic transportability. Temporal transportability was assessed using patients from the earlier period for model derivation and patients from the later period for model validation. Results: Estimates of reproducibility, pooled hospital-specific performance, and temporal transportability were on average very similar, with c-statistics of 0.75. Between-hospital variation was moderate according to I2 statistics and prediction intervals for c-statistics. Conclusion: This study illustrates how performance of prediction models can be assessed in settings with multicenter data at different time periods. PMID:27262237
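
The random-effects pooling of hospital-specific performance estimates described above can be sketched in a few lines. This is a minimal illustration (not the authors' code) of DerSimonian-Laird pooling applied to per-hospital c-statistics, including Cochran's Q and the I² heterogeneity statistic; the example estimates and variances below are hypothetical.

```python
import math

def pool_random_effects(estimates, variances):
    """DerSimonian-Laird random-effects pooling of per-hospital estimates
    (e.g. c-statistics). Returns the pooled estimate, its standard error,
    and the I^2 heterogeneity statistic."""
    k = len(estimates)
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    theta_fe = sum(wi * e for wi, e in zip(w, estimates)) / sum(w)
    q = sum(wi * (e - theta_fe) ** 2 for wi, e in zip(w, estimates))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                    # between-hospital variance
    w_re = [1.0 / (v + tau2) for v in variances]          # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, estimates)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    i2 = max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0    # I^2 heterogeneity
    return pooled, se, i2

# Hypothetical c-statistics from three hospitals, each with variance 0.001.
pooled, se, i2 = pool_random_effects([0.72, 0.75, 0.78], [0.001, 0.001, 0.001])
```

A wide prediction interval or large I² would indicate, as in the study, limited geographic transportability of the model.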

  2. Validation of biomarkers to predict response to immunotherapy in cancer: Volume II - clinical validation and regulatory considerations.

    PubMed

    Dobbin, Kevin K; Cesano, Alessandra; Alvarez, John; Hawtin, Rachael; Janetzki, Sylvia; Kirsch, Ilan; Masucci, Giuseppe V; Robbins, Paul B; Selvan, Senthamil R; Streicher, Howard Z; Zhang, Jenny; Butterfield, Lisa H; Thurin, Magdalena

    2016-01-01

    There is growing recognition that immunotherapy is likely to significantly improve health outcomes for cancer patients in the coming years. Currently, while a subset of patients experience substantial clinical benefit in response to different immunotherapeutic approaches, the majority of patients do not, yet are still exposed to significant drug toxicities. Therefore, a growing need for the development and clinical use of predictive biomarkers exists in the field of cancer immunotherapy. Predictive cancer biomarkers can be used to identify the patients who are or who are not likely to derive benefit from specific therapeutic approaches. In order to be applicable in a clinical setting, predictive biomarkers must be carefully shepherded through a step-wise, highly regulated developmental process. Volume I of this two-volume document focused on the pre-analytical and analytical phases of the biomarker development process, by providing background, examples and "good practice" recommendations. In the current Volume II, the focus is on the clinical validation, validation of clinical utility and regulatory considerations for biomarker development. Together, this two-volume series is meant to provide guidance on the entire biomarker development process, with a particular focus on the unique aspects of developing immune-based biomarkers. Specifically, knowledge about the challenges to clinical validation of predictive biomarkers, which has been gained from numerous successes and failures in other contexts, will be reviewed together with statistical methodological issues related to bias and overfitting. The different trial designs used for the clinical validation of biomarkers will also be discussed, as the selection of clinical metrics and endpoints becomes critical to establish the clinical utility of the biomarker during the clinical validation phase of the biomarker development. Finally, the regulatory aspects of submission of biomarker assays to the U.S. Food and

  3. Experimental validation of an analytical kinetic model for edge-localized modes in JET-ITER-like wall

    NASA Astrophysics Data System (ADS)

    Guillemaut, C.; Metzger, C.; Moulton, D.; Heinola, K.; O’Mullane, M.; Balboa, I.; Boom, J.; Matthews, G. F.; Silburn, S.; Solano, E. R.; contributors, JET

    2018-06-01

    The design and operation of future fusion devices relying on H-mode plasmas requires reliable modelling of edge-localized modes (ELMs) for precise prediction of divertor target conditions. An extensive experimental validation of simple analytical predictions of the time evolution of target plasma loads during ELMs has been carried out here in more than 70 JET-ITER-like wall H-mode experiments with a wide range of conditions. Comparisons of these analytical predictions with diagnostic measurements of target ion flux density, power density, impact energy and electron temperature during ELMs are presented in this paper and show excellent agreement. The analytical predictions tested here are made with the ‘free-streaming’ kinetic model (FSM) which describes ELMs as a quasi-neutral plasma bunch expanding along the magnetic field lines into the Scrape-Off Layer without collisions. Consequences of the FSM on energy reflection and deposition on divertor targets during ELMs are also discussed.

  4. Structurally compliant rocket engine combustion chamber: Experimental and analytical validation

    NASA Technical Reports Server (NTRS)

    Jankovsky, Robert S.; Arya, Vinod K.; Kazaroff, John M.; Halford, Gary R.

    1994-01-01

    A new, structurally compliant rocket engine combustion chamber design has been validated through analysis and experiment. Subscale, tubular channel chambers have been cyclically tested and analytically evaluated. Cyclic lives were determined to have a potential for 1000 percent increase over those of rectangular channel designs, the current state of the art. Greater structural compliance in the circumferential direction gave rise to lower thermal strains during hot firing, resulting in lower thermal strain ratcheting and longer predicted fatigue lives. Thermal, structural, and durability analyses of the combustion chamber design, involving cyclic temperatures, strains, and low-cycle fatigue lives, have corroborated the experimental observations.

  5. Analytic Formulation and Numerical Implementation of an Acoustic Pressure Gradient Prediction

    NASA Technical Reports Server (NTRS)

    Lee, Seongkyu; Brentner, Kenneth S.; Farassat, Fereidoun

    2007-01-01

    The scattering of rotor noise is an area that has received little attention over the years, yet the limited work that has been done has shown that both the directivity and intensity of the acoustic field may be significantly modified by the presence of scattering bodies. One of the inputs needed to compute the scattered acoustic field is the acoustic pressure gradient on a scattering surface. Two new analytical formulations of the acoustic pressure gradient have been developed and implemented in the PSU-WOPWOP rotor noise prediction code. These formulations are presented in this paper. The first formulation is derived by taking the gradient of Farassat's retarded-time Formulation 1A. Although this formulation is relatively simple, it requires numerical time differentiation of the acoustic integrals. In the second formulation, the time differentiation is taken inside the integrals analytically. The acoustic pressure gradient predicted by these new formulations is validated through comparison with the acoustic pressure gradient determined by a purely numerical approach for two model rotors. The agreement between the analytic formulations and the numerical method is excellent for both stationary and moving observer cases.

  6. MICROORGANISMS IN BIOSOLIDS: ANALYTICAL METHODS DEVELOPMENT, STANDARDIZATION, AND VALIDATION

    EPA Science Inventory

    The objective of this presentation is to discuss pathogens of concern in biosolids, the analytical techniques used to evaluate microorganisms in biosolids, and to discuss standardization and validation of analytical protocols for microbes within such a complex matrix. Implicatio...

  7. Design Characteristics Influence Performance of Clinical Prediction Rules in Validation: A Meta-Epidemiological Study.

    PubMed

    Ban, Jong-Wook; Emparanza, José Ignacio; Urreta, Iratxe; Burls, Amanda

    2016-01-01

    Many new clinical prediction rules are derived and validated. But the design and reporting quality of clinical prediction research has been less than optimal. We aimed to assess whether design characteristics of validation studies were associated with the overestimation of clinical prediction rules' performance. We also aimed to evaluate whether validation studies clearly reported important methodological characteristics. Electronic databases were searched for systematic reviews of clinical prediction rule studies published between 2006 and 2010. Data were extracted from the eligible validation studies included in the systematic reviews. A meta-analytic meta-epidemiological approach was used to assess the influence of design characteristics on predictive performance. From each validation study, it was assessed whether 7 design and 7 reporting characteristics were properly described. A total of 287 validation studies of clinical prediction rules were collected from 15 systematic reviews (31 meta-analyses). Validation studies using case-control design produced a summary diagnostic odds ratio (DOR) 2.2 times (95% CI: 1.2-4.3) larger than validation studies using cohort design and unclear design. When differential verification was used, the summary DOR was overestimated by twofold (95% CI: 1.2-3.1) compared to complete, partial and unclear verification. The summary RDOR of validation studies with inadequate sample size was 1.9 (95% CI: 1.2-3.1) compared to studies with adequate sample size. Study site, reliability, and clinical prediction rule were adequately described in 10.1%, 9.4%, and 7.0% of validation studies, respectively. Validation studies with design shortcomings may overestimate the performance of clinical prediction rules. The quality of reporting among studies validating clinical prediction rules needs to be improved.
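
The diagnostic odds ratio (DOR) at the heart of this meta-epidemiological comparison is computed from a 2×2 table of predicted versus observed outcomes, and the relative DOR (RDOR) compares design subgroups. A brief sketch with entirely hypothetical counts:

```python
def diagnostic_odds_ratio(tp, fp, fn, tn):
    """DOR = (TP * TN) / (FP * FN). A 0.5 continuity correction is added
    to each cell (Haldane-Anscombe) so zero cells stay finite."""
    tp, fp, fn, tn = (x + 0.5 for x in (tp, fp, fn, tn))
    return (tp * tn) / (fp * fn)

# Hypothetical counts: a case-control validation that overestimates
# performance versus a cohort validation of the same rule.
dor_case_control = diagnostic_odds_ratio(90, 10, 15, 85)
dor_cohort = diagnostic_odds_ratio(70, 30, 25, 75)
rdor = dor_case_control / dor_cohort   # >1 means the flawed design inflates DOR
```

An RDOR above 1, as estimated across studies in the meta-analysis, quantifies how much a design shortcoming inflates apparent performance.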

  8. Predicting playing frequencies for clarinets: A comparison between numerical simulations and simplified analytical formulas.

    PubMed

    Coyle, Whitney L; Guillemain, Philippe; Kergomard, Jean; Dalmont, Jean-Pierre

    2015-11-01

    When designing a wind instrument such as a clarinet, it can be useful to be able to predict the playing frequencies. This paper presents an analytical method to deduce these playing frequencies using the input impedance curve. Specifically there are two control parameters that have a significant influence on the playing frequency, the blowing pressure and reed opening. Four effects are known to alter the playing frequency and are examined separately: the flow rate due to the reed motion, the reed dynamics, the inharmonicity of the resonator, and the temperature gradient within the clarinet. The resulting playing frequencies for the first register of a particular professional level clarinet are found using the analytical formulas presented in this paper. The analytical predictions are then compared to numerically simulated results to validate the prediction accuracy. The main conclusion is that in general the playing frequency decreases above the oscillation threshold because of inharmonicity, then increases above the beating reed regime threshold because of the decrease of the flow rate effect.

  9. Two-Speed Gearbox Dynamic Simulation Predictions and Test Validation

    NASA Technical Reports Server (NTRS)

    Lewicki, David G.; DeSmidt, Hans; Smith, Edward C.; Bauman, Steven W.

    2010-01-01

    Dynamic simulations and experimental validation tests were performed on a two-stage, two-speed gearbox as part of the drive system research activities of the NASA Fundamental Aeronautics Subsonics Rotary Wing Project. The gearbox was driven by two electromagnetic motors and had two electromagnetic, multi-disk clutches to control output speed. A dynamic model of the system was created which included a direct current electric motor with proportional-integral-derivative (PID) speed control, a two-speed gearbox with dual electromagnetically actuated clutches, and an eddy current dynamometer. A six degree-of-freedom model of the gearbox accounted for the system torsional dynamics and included gear, clutch, shaft, and load inertias as well as shaft flexibilities and a dry clutch stick-slip friction model. Experimental validation tests were performed on the gearbox in the NASA Glenn gear noise test facility. Gearbox output speed and torque as well as drive motor speed and current were compared to those from the analytical predictions. The experiments correlate very well with the predictions, thus validating the dynamic simulation methodologies.
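
The torsional dynamics with PID speed control described above can be illustrated with a deliberately simplified model. This is a toy single-inertia sketch with hypothetical parameters (inertia, gains, load torque), not the six degree-of-freedom gearbox model used in the study:

```python
def simulate(speed_cmd=100.0, J=0.05, t_load=2.0,
             kp=1.5, ki=8.0, kd=0.01, dt=1e-4, t_end=2.0):
    """Single-inertia torsional model: J * domega/dt = T_motor - T_load,
    with T_motor set by a PID speed controller, integrated by explicit Euler.
    All parameters are illustrative, not from the NASA test rig."""
    omega, integ, prev_err = 0.0, 0.0, speed_cmd
    for _ in range(int(t_end / dt)):
        err = speed_cmd - omega
        integ += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        torque = kp * err + ki * integ + kd * deriv   # PID motor torque
        omega += (torque - t_load) / J * dt           # Euler step of J*domega/dt
    return omega                                      # rad/s after t_end seconds

final_speed = simulate()   # settles near the 100 rad/s command
```

The integral term drives the steady-state error to zero despite the constant load torque; a fuller model would add clutch stick-slip friction and shaft flexibility as in the paper.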

  10. Design Characteristics Influence Performance of Clinical Prediction Rules in Validation: A Meta-Epidemiological Study

    PubMed Central

    Ban, Jong-Wook; Emparanza, José Ignacio; Urreta, Iratxe; Burls, Amanda

    2016-01-01

    Background: Many new clinical prediction rules are derived and validated. But the design and reporting quality of clinical prediction research has been less than optimal. We aimed to assess whether design characteristics of validation studies were associated with the overestimation of clinical prediction rules’ performance. We also aimed to evaluate whether validation studies clearly reported important methodological characteristics. Methods: Electronic databases were searched for systematic reviews of clinical prediction rule studies published between 2006 and 2010. Data were extracted from the eligible validation studies included in the systematic reviews. A meta-analytic meta-epidemiological approach was used to assess the influence of design characteristics on predictive performance. From each validation study, it was assessed whether 7 design and 7 reporting characteristics were properly described. Results: A total of 287 validation studies of clinical prediction rules were collected from 15 systematic reviews (31 meta-analyses). Validation studies using case-control design produced a summary diagnostic odds ratio (DOR) 2.2 times (95% CI: 1.2–4.3) larger than validation studies using cohort design and unclear design. When differential verification was used, the summary DOR was overestimated by twofold (95% CI: 1.2–3.1) compared to complete, partial and unclear verification. The summary RDOR of validation studies with inadequate sample size was 1.9 (95% CI: 1.2–3.1) compared to studies with adequate sample size. Study site, reliability, and clinical prediction rule were adequately described in 10.1%, 9.4%, and 7.0% of validation studies, respectively. Conclusion: Validation studies with design shortcomings may overestimate the performance of clinical prediction rules. The quality of reporting among studies validating clinical prediction rules needs to be improved. PMID:26730980

  11. Validation of an online risk calculator for the prediction of anastomotic leak after colon cancer surgery and preliminary exploration of artificial intelligence-based analytics.

    PubMed

    Sammour, T; Cohen, L; Karunatillake, A I; Lewis, M; Lawrence, M J; Hunter, A; Moore, J W; Thomas, M L

    2017-11-01

    Recently published data support the use of a web-based risk calculator (www.anastomoticleak.com) for the prediction of anastomotic leak after colectomy. The aim of this study was to externally validate this calculator on a larger dataset. Consecutive adult patients undergoing elective or emergency colectomy for colon cancer at a single institution over a 9-year period were identified using the Binational Colorectal Cancer Audit database. Patients with a rectosigmoid cancer, an R2 resection, or a diverting ostomy were excluded. The primary outcome was anastomotic leak within 90 days as defined by previously published criteria. Area under receiver operating characteristic curve (AUROC) was derived and compared with that of the American College of Surgeons National Surgical Quality Improvement Program® (ACS NSQIP) calculator and the colon leakage score (CLS) calculator for left colectomy. Commercially available artificial intelligence-based analytics software was used to further interrogate the prediction algorithm. A total of 626 patients were identified. Four hundred and fifty-six patients met the inclusion criteria, and 402 had complete data available for all the calculator variables (126 had a left colectomy). Laparoscopic surgery was performed in 39.6% and emergency surgery in 14.7%. The anastomotic leak rate was 7.2%, with 31.0% requiring reoperation. The anastomoticleak.com calculator was significantly predictive of leak and performed better than the ACS NSQIP calculator (AUROC 0.73 vs 0.58) and the CLS calculator (AUROC 0.96 vs 0.80) for left colectomy. Artificial intelligence-based predictive analysis supported these findings and identified an improved prediction model. The anastomotic leak risk calculator is significantly predictive of anastomotic leak after colon cancer resection. Wider investigation of artificial intelligence-based analytics for risk prediction is warranted.
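
The AUROC comparison used here has a simple rank-based interpretation: the probability that a randomly chosen leak case receives a higher risk score than a randomly chosen non-leak case. A minimal sketch, with hypothetical scores and outcomes:

```python
def auroc(scores, labels):
    """AUROC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs where the positive scores higher,
    with ties counted as 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical calculator outputs and observed leaks (1 = leak occurred).
risk_scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
leaked      = [1,   1,   0,   1,   0,   0]
print(round(auroc(risk_scores, leaked), 3))   # → 0.889
```

An AUROC of 0.5 corresponds to chance-level discrimination (as the ACS NSQIP calculator's 0.58 approaches here), while 1.0 is perfect separation.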

  12. Validating Analytical Protocols to Determine Selected Pesticides and PCBs Using Routine Samples.

    PubMed

    Pindado Jiménez, Oscar; García Alonso, Susana; Pérez Pastor, Rosa María

    2017-01-01

    This study aims to provide recommendations concerning the validation of analytical protocols using routine samples, and is intended as a case study on how to validate analytical methods in different environmental matrices. In order to analyze the selected compounds (pesticides and polychlorinated biphenyls) in two different environmental matrices, the current work developed and validated two analytical procedures by GC-MS. A description is given of the validation of the two protocols through the analysis of more than 30 samples of water and sediments collected over nine months. The present work also estimates the uncertainty associated with both analytical protocols. In detail, the uncertainty for the water samples was estimated through a conventional approach, whereas for the sediment matrices the estimation of proportional/constant bias is also included because of their inhomogeneity. Results for the sediment matrix are reliable, showing a range of 25-35% of analytical variability associated with intermediate conditions. The analytical methodology for the water matrix determines the selected compounds with acceptable recoveries, and the combined uncertainty ranges between 20 and 30%. Analysis of routine samples is rarely used to assess the trueness of novel analytical methods, and until now this approach had not been focused on organochlorine compounds in environmental matrices.
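
A combined uncertainty in the 20-30% range, as reported for the water protocol, is typically obtained by combining independent relative standard uncertainties in quadrature (the GUM approach). The component values below are hypothetical, chosen only to illustrate the arithmetic:

```python
import math

def combined_relative_uncertainty(*components):
    """Combine independent relative standard uncertainties in quadrature:
    u_c = sqrt(sum of u_i squared), per the GUM framework."""
    return math.sqrt(sum(u * u for u in components))

# Hypothetical budget: precision, recovery/bias, and calibration
# contributions, each expressed as a relative standard uncertainty.
u_c = combined_relative_uncertainty(0.15, 0.12, 0.10)   # ~0.217, i.e. ~22%
```

Multiplying u_c by a coverage factor (commonly k = 2) would give the expanded uncertainty usually reported with results.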

  13. Using predictive analytics and big data to optimize pharmaceutical outcomes.

    PubMed

    Hernandez, Inmaculada; Zhang, Yuting

    2017-09-15

    The steps involved, the resources needed, and the challenges associated with applying predictive analytics in healthcare are described, with a review of successful applications of predictive analytics in implementing population health management interventions that target medication-related patient outcomes. In healthcare, the term big data typically refers to large quantities of electronic health record, administrative claims, and clinical trial data as well as data collected from smartphone applications, wearable devices, social media, and personal genomics services; predictive analytics refers to innovative methods of analysis developed to overcome challenges associated with big data, including a variety of statistical techniques ranging from predictive modeling to machine learning to data mining. Predictive analytics using big data have been applied successfully in several areas of medication management, such as in the identification of complex patients or those at highest risk for medication noncompliance or adverse effects. Because predictive analytics can be used in predicting different outcomes, they can provide pharmacists with a better understanding of the risks for specific medication-related problems that each patient faces. This information will enable pharmacists to deliver interventions tailored to patients' needs. In order to take full advantage of these benefits, however, clinicians will have to understand the basics of big data and predictive analytics. Predictive analytics that leverage big data will become an indispensable tool for clinicians in mapping interventions and improving patient outcomes. Copyright © 2017 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  14. Analytic Validation of Immunohistochemical Assays: A Comparison of Laboratory Practices Before and After Introduction of an Evidence-Based Guideline.

    PubMed

    Fitzgibbons, Patrick L; Goldsmith, Jeffrey D; Souers, Rhona J; Fatheree, Lisa A; Volmar, Keith E; Stuart, Lauren N; Nowak, Jan A; Astles, J Rex; Nakhleh, Raouf E

    2017-09-01

    Laboratories must demonstrate analytic validity before any test can be used clinically, but studies have shown inconsistent practices in immunohistochemical assay validation. This study aimed to assess changes in immunohistochemistry analytic validation practices after publication of an evidence-based laboratory practice guideline. A survey on current immunohistochemistry assay validation practices and on the awareness and adoption of a recently published guideline was sent to subscribers enrolled in one of 3 relevant College of American Pathologists proficiency testing programs and to additional nonsubscribing laboratories that perform immunohistochemical testing. The results were compared with an earlier survey of validation practices. Analysis was based on responses from 1085 laboratories that perform immunohistochemical staining. Of 1057 responses, 65.4% (691) were aware of the guideline recommendations before this survey was sent and 79.9% (550 of 688) of those have already adopted some or all of the recommendations. Compared with the 2010 survey, a significant number of laboratories now have written validation procedures for both predictive and nonpredictive marker assays and specifications for the minimum numbers of cases needed for validation. There was also significant improvement in compliance with validation requirements, with 99% (100 of 102) having validated their most recently introduced predictive marker assay, compared with 74.9% (326 of 435) in 2010. The difficulty in finding validation cases for rare antigens and resource limitations were cited as the biggest challenges in implementing the guideline. Dissemination of the 2014 evidence-based guideline on validation practices had a positive impact on laboratory performance; some or all of the recommendations have been adopted by nearly 80% of respondents.

  15. The legal and ethical concerns that arise from using complex predictive analytics in health care.

    PubMed

    Cohen, I Glenn; Amarasingham, Ruben; Shah, Anand; Xie, Bin; Lo, Bernard

    2014-07-01

    Predictive analytics, or the use of electronic algorithms to forecast future events in real time, makes it possible to harness the power of big data to improve the health of patients and lower the cost of health care. However, this opportunity raises policy, ethical, and legal challenges. In this article we analyze the major challenges to implementing predictive analytics in health care settings and make broad recommendations for overcoming challenges raised in the four phases of the life cycle of a predictive analytics model: acquiring data to build the model, building and validating it, testing it in real-world settings, and disseminating and using it more broadly. For instance, we recommend that model developers implement governance structures that include patients and other stakeholders starting in the earliest phases of development. In addition, developers should be allowed to use already collected patient data without explicit consent, provided that they comply with federal regulations regarding research on human subjects and the privacy of health information. Project HOPE—The People-to-People Health Foundation, Inc.

  16. Direct Validation of Differential Prediction.

    ERIC Educational Resources Information Center

    Lunneborg, Clifford E.

    Using academic achievement data for 655 University students, direct validation of differential predictions based on a battery of aptitude/achievement measures selected for their differential prediction efficiency was attempted. In the cross-validation of the prediction of actual differences among five academic area GPA's, this set of differential…

  17. Healthcare predictive analytics: An overview with a focus on Saudi Arabia.

    PubMed

    Alharthi, Hana

    2018-03-08

    Despite a newfound wealth of data and information, the healthcare sector is lacking in actionable knowledge. This is largely because healthcare data, though plentiful, tends to be inherently complex and fragmented. Health data analytics, with an emphasis on predictive analytics, is emerging as a transformative tool that can enable more proactive and preventative treatment options. This review considers the ways in which predictive analytics has been applied in the for-profit business sector to generate well-timed and accurate predictions of key outcomes, with a focus on key features that may be applicable to healthcare-specific applications. Published medical research presenting assessments of predictive analytics technology in medical applications is reviewed, with particular emphasis on how hospitals have integrated predictive analytics into their day-to-day healthcare services to improve quality of care. This review also highlights the numerous challenges of implementing predictive analytics in healthcare settings and concludes with a discussion of current efforts to implement healthcare data analytics in a developing country, Saudi Arabia. Copyright © 2018 The Author. Published by Elsevier Ltd. All rights reserved.

  18. A closed-form analytical model for predicting 3D boundary layer displacement thickness for the validation of viscous flow solvers

    NASA Astrophysics Data System (ADS)

    Kumar, V. R. Sanal; Sankar, Vigneshwaran; Chandrasekaran, Nichith; Saravanan, Vignesh; Natarajan, Vishnu; Padmanabhan, Sathyan; Sukumaran, Ajith; Mani, Sivabalan; Rameshkumar, Tharikaa; Nagaraju Doddi, Hema Sai; Vysaprasad, Krithika; Sharan, Sharad; Murugesh, Pavithra; Shankar, S. Ganesh; Nejaamtheen, Mohammed Niyasdeen; Baskaran, Roshan Vignesh; Rahman Mohamed Rafic, Sulthan Ariff; Harisrinivasan, Ukeshkumar; Srinivasan, Vivek

    2018-02-01

    A closed-form analytical model is developed for estimating the 3D boundary-layer-displacement thickness of an internal flow system at the Sanal flow choking condition for adiabatic flows obeying the physics of compressible viscous fluids. At this unique condition the boundary-layer blockage induced fluid-throat choking and the adiabatic wall-friction persuaded flow choking occur at a single sonic-fluid-throat location. The beauty and novelty of this model is that, without missing the flow physics, we could predict the exact boundary-layer blockage of both 2D and 3D cases at the sonic-fluid-throat from the known values of the inlet Mach number, the adiabatic index of the gas and the inlet port diameter of the internal flow system. We found that the 3D blockage factor is 47.33% lower than the 2D blockage factor with air as the working fluid. We concluded that the exact prediction of the boundary-layer-displacement thickness at the sonic-fluid-throat provides a means to correctly pinpoint the causes of errors of viscous flow solvers. The methodology presented herein will play a pivotal role in future physical and biological sciences for credible verification, calibration and validation of various viscous flow solvers for high-fidelity 2D/3D numerical simulations of real-world flows. Furthermore, our closed-form analytical model will be useful for solid and hybrid rocket designers for the grain-port-geometry optimization of new generation single-stage-to-orbit dual-thrust-motors with the highest promising propellant loading density within the given envelope without manifestation of the Sanal flow choking leading to possible shock waves causing catastrophic failures.

  19. Influence versus intent for predictive analytics in situation awareness

    NASA Astrophysics Data System (ADS)

    Cui, Biru; Yang, Shanchieh J.; Kadar, Ivan

    2013-05-01

    Predictive analytics in situation awareness requires an element to comprehend and anticipate potential adversary activities that might occur in the future. Most work in high level fusion or predictive analytics utilizes machine learning, pattern mining, Bayesian inference, and decision tree techniques to predict future actions or states. The emergence of social computing in broader contexts has drawn interests in bringing the hypotheses and techniques from social theory to algorithmic and computational settings for predictive analytics. This paper aims at answering the question on how influence and attitude (some interpreted such as intent) of adversarial actors can be formulated and computed algorithmically, as a higher level fusion process to provide predictions of future actions. The challenges in this interdisciplinary endeavor include drawing existing understanding of influence and attitude in both social science and computing fields, as well as the mathematical and computational formulation for the specific context of situation to be analyzed. The study of `influence' has resurfaced in recent years due to the emergence of social networks in the virtualized cyber world. Theoretical analysis and techniques developed in this area are discussed in this paper in the context of predictive analysis. Meanwhile, the notion of intent, or `attitude' using social theory terminologies, is a relatively uncharted area in the computing field. Note that a key objective of predictive analytics is to identify impending/planned attacks so their `impact' and `threat' can be prevented. In this spirit, indirect and direct observables are drawn and derived to infer the influence network and attitude to predict future threats. This work proposes an integrated framework that jointly assesses adversarial actors' influence network and their attitudes as a function of past actions and action outcomes. A preliminary set of algorithms are developed and tested using the Global Terrorism

  20. Validation of finite element and boundary element methods for predicting structural vibration and radiated noise

    NASA Technical Reports Server (NTRS)

    Seybert, A. F.; Wu, X. F.; Oswald, Fred B.

    1992-01-01

    Analytical and experimental validation of methods to predict structural vibration and radiated noise are presented. A rectangular box excited by a mechanical shaker was used as a vibrating structure. Combined finite element method (FEM) and boundary element method (BEM) models of the apparatus were used to predict the noise radiated from the box. The FEM was used to predict the vibration, and the surface vibration was used as input to the BEM to predict the sound intensity and sound power. Vibration predicted by the FEM model was validated by experimental modal analysis. Noise predicted by the BEM was validated by sound intensity measurements. Three types of results are presented for the total radiated sound power: (1) sound power predicted by the BEM model using vibration data measured on the surface of the box; (2) sound power predicted by the FEM/BEM model; and (3) sound power measured by a sound intensity scan. The sound power predicted from the BEM model using measured vibration data yields an excellent prediction of radiated noise. The sound power predicted by the combined FEM/BEM model also gives a good prediction of radiated noise except for a shift of the natural frequencies due to limitations in the FEM model.

  1. Pavement Performance : Approaches Using Predictive Analytics

    DOT National Transportation Integrated Search

    2018-03-23

    Acceptable pavement condition is paramount to road safety. Using predictive analytics techniques, this project attempted to develop models that provide an assessment of pavement condition based on an array of indicators that include pavement distress,...

  2. Analytical difficulties facing today's regulatory laboratories: issues in method validation.

    PubMed

    MacNeil, James D

    2012-08-01

    The challenges facing analytical laboratories today are not unlike those faced in the past, although both the degree of complexity and the rate of change have increased. Challenges such as the development and maintenance of expertise, maintenance and updating of equipment, and the introduction of new test methods have always been familiar themes for analytical laboratories, but international guidelines for laboratories involved in the import and export testing of food require management of such changes in a context which includes quality assurance, accreditation, and method validation considerations. Decisions as to when a change in a method requires re-validation, or on the design of a validation scheme for a complex multi-residue method, require a well-considered strategy based on current knowledge of international guidance documents and regulatory requirements, as well as the laboratory's quality system requirements. Validation demonstrates that a method is 'fit for purpose', so the requirement for validation should be assessed in terms of the intended use of a method and, in the case of a change or modification of a method, whether that change or modification may affect a previously validated performance characteristic. In general, method validation involves method scope, calibration-related parameters, method precision, and recovery. Any method change which may affect method scope or any performance parameter will require re-validation. Some typical situations involving changes in methods are discussed, and a decision process is proposed for selecting appropriate validation measures. © 2012 John Wiley & Sons, Ltd.

  3. Analytical Performance Modeling and Validation of Intel’s Xeon Phi Architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chunduri, Sudheer; Balaprakash, Prasanna; Morozov, Vitali

    Modeling the performance of scientific applications on emerging hardware plays a central role in achieving extreme-scale computing goals. Analytical models that capture the interaction between applications and hardware characteristics are attractive because even a reasonably accurate model can be useful for performance tuning before the hardware is made available. In this paper, we develop a hardware model for Intel’s second-generation Xeon Phi architecture, code-named Knights Landing (KNL), for the SKOPE framework. We validate the KNL hardware model by projecting the performance of mini-benchmarks and application kernels. The results show that our KNL model can project the performance with prediction errors of 10% to 20%. The hardware model also provides informative recommendations for code transformations and tuning.

  4. Predictive Big Data Analytics: A Study of Parkinson's Disease Using Large, Complex, Heterogeneous, Incongruent, Multi-Source and Incomplete Observations.

    PubMed

    Dinov, Ivo D; Heavner, Ben; Tang, Ming; Glusman, Gustavo; Chard, Kyle; Darcy, Mike; Madduri, Ravi; Pa, Judy; Spino, Cathie; Kesselman, Carl; Foster, Ian; Deutsch, Eric W; Price, Nathan D; Van Horn, John D; Ames, Joseph; Clark, Kristi; Hood, Leroy; Hampstead, Benjamin M; Dauer, William; Toga, Arthur W

    2016-01-01

    A unique archive of Big Data on Parkinson's Disease is collected, managed and disseminated by the Parkinson's Progression Markers Initiative (PPMI). The integration of such complex and heterogeneous Big Data from multiple sources offers unparalleled opportunities to study the early stages of prevalent neurodegenerative processes, track their progression and quickly identify the efficacies of alternative treatments. Many previous human and animal studies have examined the relationship of Parkinson's disease (PD) risk to trauma, genetics, environment, co-morbidities, or life style. The defining characteristics of Big Data (large size, incongruency, incompleteness, complexity, multiplicity of scales, and heterogeneity of information-generating sources) all pose challenges to the classical techniques for data management, processing, visualization and interpretation. We propose, implement, test and validate complementary model-based and model-free approaches for PD classification and prediction. To explore PD risk using Big Data methodology, we jointly processed complex PPMI imaging, genetics, clinical and demographic data. Collective representation of the multi-source data facilitates the aggregation and harmonization of complex data elements. This enables joint modeling of the complete data, leading to the development of Big Data analytics, predictive synthesis, and statistical validation. Using heterogeneous PPMI data, we developed a comprehensive protocol for end-to-end data characterization, manipulation, processing, cleaning, analysis and validation. Specifically, we (i) introduce methods for rebalancing imbalanced cohorts, (ii) utilize a wide spectrum of classification methods to generate consistent and powerful phenotypic predictions, and (iii) generate reproducible machine-learning based classification that enables the reporting of model parameters and diagnostic forecasting based on new data. We evaluated several complementary model-based predictive approaches
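    Step (i) of the protocol, rebalancing imbalanced cohorts, can be illustrated with simple random oversampling of the minority class. This is a generic sketch of one common rebalancing technique, not the PPMI study's actual method:

    ```python
    import numpy as np

    def oversample_minority(X, y, seed=0):
        """Rebalance a cohort by randomly resampling each minority
        class (with replacement) up to the majority-class count."""
        rng = np.random.default_rng(seed)
        X, y = np.asarray(X), np.asarray(y)
        classes, counts = np.unique(y, return_counts=True)
        n_max = counts.max()
        idx = []
        for c in classes:
            members = np.flatnonzero(y == c)
            # draw extra samples only for under-represented classes
            extra = rng.choice(members, size=n_max - len(members), replace=True)
            idx.extend(members)
            idx.extend(extra)
        idx = np.array(idx)
        return X[idx], y[idx]

    # Toy cohort: 8 controls vs. 2 cases -> balanced 8 vs. 8 after resampling
    X = np.arange(10).reshape(-1, 1)
    y = np.array([0] * 8 + [1] * 2)
    Xb, yb = oversample_minority(X, y)
    ```

    In practice, resampling is done only inside the training folds of cross-validation so that duplicated cases never leak into the evaluation set.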

  5. Development and validation of a numerical acoustic analysis program for aircraft interior noise prediction

    NASA Astrophysics Data System (ADS)

    Garcea, Ralph; Leigh, Barry; Wong, R. L. M.

    Reduction of interior noise in propeller-driven aircraft, to levels comparable with those obtained in jet transports, has become a leading factor in the early design stages of the new generation of turboprops, and may be essential if these new designs are to succeed. The need for an analytical capability to predict interior noise is accepted throughout the turboprop aircraft industry. To this end, an analytical noise prediction program, which incorporates the SYSNOISE numerical acoustic analysis software, is under development at de Havilland. The discussion contained herein looks at the development program and how it was used in a design sensitivity analysis to optimize the structural design of the aircraft cabin for the purpose of reducing interior noise levels. This report also summarizes the validation of the SYSNOISE package using numerous classical cases from the literature.

  6. Geospatial Analytics in Retail Site Selection and Sales Prediction.

    PubMed

    Ting, Choo-Yee; Ho, Chiung Ching; Yee, Hui Jia; Matsah, Wan Razali

    2018-03-01

    Studies have shown that certain features from geography, demography, trade area, and environment can play a vital role in retail site selection, largely due to the impact they exert on retail performance. Although the relevant features could be elicited by domain experts, determining the optimal feature set can be an intractable and labor-intensive exercise. The challenges center around (1) how to determine which features are important to a particular retail business and (2) how to estimate retail sales performance at a new location. The challenges become apparent when the features vary across time. In this light, this study proposed a nonintervening approach, employing feature selection algorithms and subsequently predicting sales through similarity-based methods. The results of prediction were validated by domain experts. In this study, data sets from different sources were transformed and aggregated before an analytics data set ready for analysis could be obtained. The data sets included data about location, population count, property type, education status, and monthly sales from 96 branches of a telecommunication company in Malaysia. The findings suggested that (1) optimal retail performance can only be achieved through fulfillment of specific location features together with the surrounding trade-area characteristics and (2) similarity-based methods can provide a solution to retail sales prediction.

  7. A Generalized Pivotal Quantity Approach to Analytical Method Validation Based on Total Error.

    PubMed

    Yang, Harry; Zhang, Jianchun

    2015-01-01

    The primary purpose of method validation is to demonstrate that the method is fit for its intended use. Traditionally, an analytical method is deemed valid if its performance characteristics, such as accuracy and precision, are shown to meet prespecified acceptance criteria. However, these acceptance criteria are not directly related to the method's intended purpose, which is usually a guarantee that a high percentage of the test results of future samples will be close to their true values. Alternate "fit for purpose" acceptance criteria based on the concept of total error have been increasingly used. Such criteria allow for assessing method validity while taking into account the relationship between accuracy and precision. Although several statistical test methods have been proposed in the literature to test the "fit for purpose" hypothesis, the majority are not designed to protect against the risk of accepting unsuitable methods, and thus have the potential to cause uncontrolled consumer's risk. In this paper, we propose a test method based on generalized pivotal quantity inference. Through simulation studies, the performance of the method is compared to five existing approaches. The results show that both the new method and the method based on a β-content tolerance interval with a confidence level of 90%, hereafter referred to as the β-content (0.9) method, control Type I error and thus consumer's risk, while the other existing methods do not. It is further demonstrated that the generalized pivotal quantity method is less conservative than the β-content (0.9) method when the analytical methods are biased, whereas it is more conservative when the analytical methods are unbiased. Therefore, selection of either the generalized pivotal quantity or β-content (0.9) method for an analytical method validation depends on the accuracy of the analytical method. It is also shown that the generalized pivotal quantity method has better asymptotic properties than all of the current
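    The generalized-pivotal-quantity idea can be sketched for the standard normal model: simulate GPQs for the method's bias and precision from the observed mean and standard deviation, derive the distribution of the proportion of future results falling within the acceptance limits, and accept the method only if the lower confidence bound on that proportion meets the target. This is a hedged illustration of the general GPQ approach under a normal model, not the authors' exact procedure, and the parameter values are illustrative:

    ```python
    import numpy as np
    from math import erf, sqrt

    def _phi(x):
        # standard normal CDF, vectorized over numpy arrays
        return 0.5 * (1.0 + np.vectorize(erf)(np.asarray(x) / sqrt(2.0)))

    def gpq_accept(xbar, s, n, lam=15.0, pi0=0.8, alpha=0.05, B=20000, seed=1):
        """Monte Carlo GPQ test of a total-error criterion: accept the
        method if, with confidence 1-alpha, at least a proportion pi0 of
        future results fall within +/-lam of the true value.
        xbar, s: observed mean bias and SD from n validation results."""
        rng = np.random.default_rng(seed)
        Z = rng.standard_normal(B)              # pivot for the mean
        V = rng.chisquare(n - 1, B)             # pivot for the variance
        sigma_g = s / np.sqrt(V / (n - 1))      # GPQ for sigma
        mu_g = xbar - Z * sigma_g / np.sqrt(n)  # GPQ for mu (bias)
        # GPQ for the proportion of results within the acceptance limits
        pi_g = _phi((lam - mu_g) / sigma_g) - _phi((-lam - mu_g) / sigma_g)
        lower = float(np.quantile(pi_g, alpha))
        return lower >= pi0, lower

    # A nearly unbiased, precise method (units of % of target): accepted
    accept, lower = gpq_accept(xbar=1.0, s=3.0, n=30)
    ```

    The appeal of the GPQ construction is that the acceptance decision controls the consumer's risk directly, rather than testing accuracy and precision against separate criteria.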

  8. Predictive Big Data Analytics: A Study of Parkinson’s Disease Using Large, Complex, Heterogeneous, Incongruent, Multi-Source and Incomplete Observations

    PubMed Central

    Dinov, Ivo D.; Heavner, Ben; Tang, Ming; Glusman, Gustavo; Chard, Kyle; Darcy, Mike; Madduri, Ravi; Pa, Judy; Spino, Cathie; Kesselman, Carl; Foster, Ian; Deutsch, Eric W.; Price, Nathan D.; Van Horn, John D.; Ames, Joseph; Clark, Kristi; Hood, Leroy; Hampstead, Benjamin M.; Dauer, William; Toga, Arthur W.

    2016-01-01

    Background A unique archive of Big Data on Parkinson’s Disease is collected, managed and disseminated by the Parkinson’s Progression Markers Initiative (PPMI). The integration of such complex and heterogeneous Big Data from multiple sources offers unparalleled opportunities to study the early stages of prevalent neurodegenerative processes, track their progression and quickly identify the efficacies of alternative treatments. Many previous human and animal studies have examined the relationship of Parkinson’s disease (PD) risk to trauma, genetics, environment, co-morbidities, or life style. The defining characteristics of Big Data (large size, incongruency, incompleteness, complexity, multiplicity of scales, and heterogeneity of information-generating sources) all pose challenges to the classical techniques for data management, processing, visualization and interpretation. We propose, implement, test and validate complementary model-based and model-free approaches for PD classification and prediction. To explore PD risk using Big Data methodology, we jointly processed complex PPMI imaging, genetics, clinical and demographic data. Methods and Findings Collective representation of the multi-source data facilitates the aggregation and harmonization of complex data elements. This enables joint modeling of the complete data, leading to the development of Big Data analytics, predictive synthesis, and statistical validation. Using heterogeneous PPMI data, we developed a comprehensive protocol for end-to-end data characterization, manipulation, processing, cleaning, analysis and validation. Specifically, we (i) introduce methods for rebalancing imbalanced cohorts, (ii) utilize a wide spectrum of classification methods to generate consistent and powerful phenotypic predictions, and (iii) generate reproducible machine-learning based classification that enables the reporting of model parameters and diagnostic forecasting based on new data. We evaluated several

  9. Harmonization of strategies for the validation of quantitative analytical procedures. A SFSTP proposal--Part I.

    PubMed

    Hubert, Ph; Nguyen-Huu, J-J; Boulanger, B; Chapuzet, E; Chiap, P; Cohen, N; Compagnon, P-A; Dewé, W; Feinberg, M; Lallier, M; Laurentie, M; Mercier, N; Muzard, G; Nivet, C; Valat, L

    2004-11-15

    This paper is the first part of a summary report of a new commission of the Société Française des Sciences et Techniques Pharmaceutiques (SFSTP). The main objective of this commission was the harmonization of approaches for the validation of quantitative analytical procedures. Indeed, the principle of validating these procedures is today widespread in all domains of activity where measurements are made. Nevertheless, the simple question of whether or not an analytical procedure is acceptable for a given application remains incompletely resolved in several cases, despite the various regulations relating to good practices (GLP, GMP, ...) and other normative documents (ISO, ICH, FDA, ...). There are many official documents describing the validation criteria to be tested, but they do not propose any experimental protocol and most often limit themselves to general concepts. For those reasons, two previous SFSTP commissions elaborated validation guides to provide concrete help to the industrial scientists in charge of drug development in applying those regulatory recommendations. While these first two guides contributed widely to the use and progress of analytical validation, they nevertheless present weaknesses regarding the conclusions of the statistical tests performed and the decisions to be made with respect to the acceptance limits defined by the use of an analytical procedure. The present paper proposes to revisit the very bases of analytical validation in order to develop a harmonized approach, notably by distinguishing diagnosis rules from decision rules. The latter rule is based on the use of the accuracy profile, uses the notion of total error, and simplifies the validation of an analytical procedure while controlling the risk associated with its use. Thanks to this novel validation approach, it is possible to unambiguously demonstrate the fitness for purpose of a new method as stated in all regulatory

  10. Introduction to Validation of Analytical Methods: Potentiometric Determination of CO[subscript 2]

    ERIC Educational Resources Information Center

    Hipólito-Nájera, A. Ricardo; Moya-Hernandez, M. Rosario; Gomez-Balderas, Rodolfo; Rojas-Hernandez, Alberto; Romero-Romo, Mario

    2017-01-01

    Validation of analytical methods is a fundamental subject for chemical analysts working in chemical industries. These methods are also relevant for pharmaceutical enterprises, biotechnology firms, analytical service laboratories, government departments, and regulatory agencies. Therefore, for undergraduate students enrolled in majors in the field…

  11. PAUSE: Predictive Analytics Using SPARQL-Endpoints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sukumar, Sreenivas R; Ainsworth, Keela; Bond, Nathaniel

    2014-07-11

    This invention relates to the medical industry and more specifically to methods of predicting risks. With the impetus towards personalized and evidence-based medicine, the need for a framework to analyze/interpret quantitative measurements (blood work, toxicology, etc.) with qualitative descriptions (specialist reports after reading images, bio-medical knowledgebase, etc.) to predict diagnostic risks is fast emerging. We describe a software solution that leverages hardware for scalable in-memory analytics and applies next-generation semantic query tools on medical data.

  12. An analytical solution for predicting the transient seepage from a subsurface drainage system

    NASA Astrophysics Data System (ADS)

    Xin, Pei; Dan, Han-Cheng; Zhou, Tingzhang; Lu, Chunhui; Kong, Jun; Li, Ling

    2016-05-01

    Subsurface drainage systems have been widely used to deal with soil salinization and waterlogging problems around the world. In this paper, a mathematical model was introduced to quantify the transient behavior of the groundwater table and the seepage from a subsurface drainage system. Based on the assumption of a hydrostatic pressure distribution, the model considered the pore-water flow in both the phreatic and vadose soil zones. An approximate analytical solution for the model was derived to quantify the drainage of soils which were initially water-saturated. The analytical solution was validated against laboratory experiments and a 2-D Richards equation-based model, and found to predict well the transient water seepage from the subsurface drainage system. A saturated flow-based model was also tested and found to over-predict the time required for drainage and the total water seepage by nearly one order of magnitude, in comparison with the experimental results and the present analytical solution. During drainage, a vadose zone with a significant water storage capacity developed above the phreatic surface. A considerable amount of water still remained in the vadose zone at the steady state with the water table situated at the drain bottom. Sensitivity analyses demonstrated that effects of the vadose zone were intensified with an increased thickness of capillary fringe, capillary rise and/or burying depth of drains, in terms of the required drainage time and total water seepage. The analytical solution provides guidance for assessing the capillary effects on the effectiveness and efficiency of subsurface drainage systems for combating soil salinization and waterlogging problems.

  13. Automated Predictive Big Data Analytics Using Ontology Based Semantics.

    PubMed

    Nural, Mustafa V; Cotterell, Michael E; Peng, Hao; Xie, Rui; Ma, Ping; Miller, John A

    2015-10-01

    Predictive analytics in the big data era is taking on an increasingly important role. Issues related to the choice of modeling technique, estimation procedure (or algorithm), and efficient execution can present significant challenges. For example, selection of appropriate and optimal models for big data analytics often requires careful investigation and considerable expertise which might not always be readily available. In this paper, we propose to use semantic technology to assist data analysts and data scientists in selecting appropriate modeling techniques and building specific models, as well as in documenting the rationale for the techniques and models selected. To formally describe the modeling techniques, models, and results, we developed the Analytics Ontology, which supports inferencing for semi-automated model selection. The SCALATION framework, which currently supports over thirty modeling techniques for predictive big data analytics, is used as a testbed for evaluating the use of semantic technology.

  14. Automated Predictive Big Data Analytics Using Ontology Based Semantics

    PubMed Central

    Nural, Mustafa V.; Cotterell, Michael E.; Peng, Hao; Xie, Rui; Ma, Ping; Miller, John A.

    2017-01-01

    Predictive analytics in the big data era is taking on an increasingly important role. Issues related to the choice of modeling technique, estimation procedure (or algorithm), and efficient execution can present significant challenges. For example, selection of appropriate and optimal models for big data analytics often requires careful investigation and considerable expertise which might not always be readily available. In this paper, we propose to use semantic technology to assist data analysts and data scientists in selecting appropriate modeling techniques and building specific models, as well as in documenting the rationale for the techniques and models selected. To formally describe the modeling techniques, models, and results, we developed the Analytics Ontology, which supports inferencing for semi-automated model selection. The SCALATION framework, which currently supports over thirty modeling techniques for predictive big data analytics, is used as a testbed for evaluating the use of semantic technology. PMID:29657954

  15. Predictive Analytics to Support Real-Time Management in Pathology Facilities.

    PubMed

    Lessard, Lysanne; Michalowski, Wojtek; Chen Li, Wei; Amyot, Daniel; Halwani, Fawaz; Banerjee, Diponkar

    2016-01-01

    Predictive analytics can provide valuable support to the effective management of pathology facilities. The introduction of new tests and technologies in anatomical pathology will increase the volume of specimens to be processed, as well as the complexity of pathology processes. In order for predictive analytics to address managerial challenges associated with the volume and complexity increases, it is important to pinpoint the areas where pathology managers would most benefit from predictive capabilities. We illustrate common issues in managing pathology facilities with an analysis of the surgical specimen process at the Department of Pathology and Laboratory Medicine (DPLM) at The Ottawa Hospital, which processes all surgical specimens for the Eastern Ontario Regional Laboratory Association. We then show how predictive analytics could be used to support management. Our proposed approach can be generalized beyond the DPLM, contributing to a more effective management of pathology facilities and in turn to quicker clinical diagnoses.

  16. Validation of Analytical Damping Ratio by Fatigue Stress Limit

    NASA Astrophysics Data System (ADS)

    Foong, Faruq Muhammad; Chung Ket, Thein; Beng Lee, Ooi; Aziz, Abdul Rashid Abdul

    2018-03-01

    The optimisation process of a vibration energy harvester is usually restricted to experimental approaches due to the lack of an analytical equation to describe the damping of a system. This study derives an analytical equation describing the first-mode damping ratio of a clamped-free cantilever beam under harmonic base excitation by combining the transverse equation of motion of the beam with the damping-stress equation. This equation, as opposed to other common damping determination methods, is independent of experimental inputs or finite element simulations and can be solved using a simple iterative convergence method. The derived equation was found to be correct for cases where the maximum bending stress in the beam is below the fatigue limit stress of the beam. However, an increasing trend in the error between the experimental and analytical results was observed at high stress levels. Hence, the fatigue limit stress was used as a parameter to define the validity of the analytical equation.

  17. Population Spotting Using Big Data: Validating the Human Performance Concept of Operations Analytic Vision

    DTIC Science & Technology

    2017-01-01

    AFRL-SA-WP-SR-2017-0001: Population Spotting Using “Big Data”: Validating the Human Performance Concept of Operations Analytic Vision.

  18. Prediction, Detection, and Validation of Isotope Clusters in Mass Spectrometry Data

    PubMed Central

    Treutler, Hendrik; Neumann, Steffen

    2016-01-01

    Mass spectrometry is a key analytical platform for metabolomics. The precise quantification and identification of small molecules is a prerequisite for elucidating metabolism, and the detection, validation, and evaluation of isotope clusters in LC-MS data are important for this task. Here, we present an approach for the improved detection of isotope clusters using chemical prior knowledge, and for the validation of detected isotope clusters depending on the substance mass using database statistics. We find remarkable improvements in the number of detected isotope clusters and are able to predict the correct molecular formula in the top three ranks in 92% of the cases. We make our methodology freely available as part of the Bioconductor packages xcms version 1.50.0 and CAMERA version 1.30.0. PMID:27775610
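    The core of isotope-cluster detection is grouping centroided peaks spaced by roughly one neutron mass (about 1.00336 Da, divided by the charge state). A simplified sketch of that grouping idea, not the actual xcms/CAMERA algorithm, with illustrative peak values:

    ```python
    import numpy as np

    DELTA = 1.00336  # approx. mass spacing between successive isotopologues [Da]

    def find_isotope_clusters(mz, charge=1, tol=0.01):
        """Group a list of centroided m/z values into candidate isotope
        clusters: runs of peaks spaced by ~DELTA/charge within tol."""
        mz = np.sort(np.asarray(mz, dtype=float))
        spacing = DELTA / charge
        clusters, current = [], [0]
        for i in range(1, len(mz)):
            if abs(mz[i] - mz[i - 1] - spacing) <= tol:
                current.append(i)          # extends the current cluster
            else:
                if len(current) > 1:       # keep only multi-peak runs
                    clusters.append([mz[j] for j in current])
                current = [i]
        if len(current) > 1:
            clusters.append([mz[j] for j in current])
        return clusters

    peaks = [180.063, 181.067, 182.070, 250.5, 300.110, 301.113]
    clusters = find_isotope_clusters(peaks)  # two clusters; 250.5 is a singleton
    ```

    Real implementations additionally check intensity ratios against the expected isotope pattern for the candidate mass, which is where the chemical prior knowledge mentioned in the abstract comes in.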

  19. Towards a full integration of optimization and validation phases: An analytical-quality-by-design approach.

    PubMed

    Hubert, C; Houari, S; Rozet, E; Lebrun, P; Hubert, Ph

    2015-05-22

    When using an analytical method, defining an analytical target profile (ATP) focused on quantitative performance represents a key input, and this will drive the method development process. In this context, two case studies were selected in order to demonstrate the potential of a quality-by-design (QbD) strategy when applied to two specific phases of the method lifecycle: the pre-validation study and the validation step. The first case study focused on the improvement of a liquid chromatography (LC) coupled to mass spectrometry (MS) stability-indicating method by the means of the QbD concept. The design of experiments (DoE) conducted during the optimization step (i.e. determination of the qualitative design space (DS)) was performed a posteriori. Additional experiments were performed in order to simultaneously conduct the pre-validation study to assist in defining the DoE to be conducted during the formal validation step. This predicted protocol was compared to the one used during the formal validation. A second case study based on the LC/MS-MS determination of glucosamine and galactosamine in human plasma was considered in order to illustrate an innovative strategy allowing the QbD methodology to be incorporated during the validation phase. An operational space, defined by the qualitative DS, was considered during the validation process rather than a specific set of working conditions as conventionally performed. Results of all the validation parameters conventionally studied were compared to those obtained with this innovative approach for glucosamine and galactosamine. Using this strategy, qualitative and quantitative information were obtained. Consequently, an analyst using this approach would be able to select with great confidence several working conditions within the operational space rather than a given condition for the routine use of the method. This innovative strategy combines both a learning process and a thorough assessment of the risk involved

  20. Experimentally validated mathematical model of analyte uptake by permeation passive samplers.

    PubMed

    Salim, F; Ioannidis, M; Górecki, T

    2017-11-15

    A mathematical model describing the sampling process in a permeation-based passive sampler was developed and evaluated numerically. The model was applied to the Waterloo Membrane Sampler (WMS), which employs a polydimethylsiloxane (PDMS) membrane as a permeation barrier, and an adsorbent as a receiving phase. Samplers of this kind are used for sampling volatile organic compounds (VOC) from air and soil gas. The model predicts the spatio-temporal variation of sorbed and free analyte concentrations within the sampler components (membrane, sorbent bed and dead volume), from which the uptake rate throughout the sampling process can be determined. A gradual decline in the uptake rate during the sampling process is predicted, which is more pronounced when sampling higher concentrations. Decline of the uptake rate can be attributed to diminishing analyte concentration gradient within the membrane, which results from resistance to mass transfer and the development of analyte concentration gradients within the sorbent bed. The effects of changing the sampler component dimensions on the rate of this decline in the uptake rate can be predicted from the model. Performance of the model was evaluated experimentally for sampling of toluene vapors under controlled conditions. The model predictions proved close to the experimental values. The model provides a valuable tool to predict changes in the uptake rate during sampling, to assign suitable exposure times at different analyte concentration levels, and to optimize the dimensions of the sampler in a manner that minimizes these changes during the sampling period.
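    The qualitative behavior the model predicts, a gradually declining uptake rate as the concentration gradient across the membrane weakens, can be reproduced with a toy finite-difference version of Fickian membrane transport. The well-mixed sorbent treatment and all parameter values below are illustrative assumptions, not the WMS model of the paper:

    ```python
    import numpy as np

    def wms_uptake(C0=1.0, L=1e-4, D=1e-10, K=100.0, Vs=1e-9, A=1e-4,
                   nx=50, dt=0.01, t_end=500.0):
        """1-D diffusion through a membrane of thickness L (explicit finite
        differences), with the receiving sorbent modeled as a well-mixed
        phase of volume Vs and partition coefficient K whose rising free
        concentration weakens the gradient. Returns sampled uptake rates."""
        dx = L / (nx - 1)
        assert D * dt / dx**2 < 0.5            # explicit-scheme stability
        c = np.zeros(nx)
        c[0] = C0                              # outer face held at ambient C0
        m = 0.0                                # mass accumulated in sorbent
        times, rates = [], []
        steps = int(t_end / dt)
        for k in range(steps):
            J = D * (c[-2] - c[-1]) / dx       # flux into the sorbent
            if k % (steps // 10) == 0:
                times.append(k * dt)
                rates.append(J * A)            # uptake rate [amount/s]
            m += J * A * dt
            c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
            c[-1] = m / (K * Vs)               # rising free conc. at inner face
        return times, rates

    # Uptake rate rises during the initial transient, then declines as the
    # sorbent loads up -- the behavior described in the abstract.
    times, rates = wms_uptake()
    ```

    Making the sorbent capacity K*Vs larger slows the decline, which mirrors the paper's point that sampler dimensions can be optimized to keep the uptake rate nearly constant over the exposure period.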

  1. Predictive Analytics to Support Real-Time Management in Pathology Facilities

    PubMed Central

    Lessard, Lysanne; Michalowski, Wojtek; Chen Li, Wei; Amyot, Daniel; Halwani, Fawaz; Banerjee, Diponkar

    2016-01-01

    Predictive analytics can provide valuable support to the effective management of pathology facilities. The introduction of new tests and technologies in anatomical pathology will increase the volume of specimens to be processed, as well as the complexity of pathology processes. In order for predictive analytics to address managerial challenges associated with the volume and complexity increases, it is important to pinpoint the areas where pathology managers would most benefit from predictive capabilities. We illustrate common issues in managing pathology facilities with an analysis of the surgical specimen process at the Department of Pathology and Laboratory Medicine (DPLM) at The Ottawa Hospital, which processes all surgical specimens for the Eastern Ontario Regional Laboratory Association. We then show how predictive analytics could be used to support management. Our proposed approach can be generalized beyond the DPLM, contributing to a more effective management of pathology facilities and in turn to quicker clinical diagnoses. PMID:28269873

  2. Validation of a Deterministic Vibroacoustic Response Prediction Model

    NASA Technical Reports Server (NTRS)

    Caimi, Raoul E.; Margasahayam, Ravi

    1997-01-01

    This report documents the recently completed effort involving validation of a deterministic theory for the random vibration problem of predicting the response of launch pad structures in the low-frequency range (0 to 50 hertz). Use of the Statistical Energy Analysis (SEA) methods is not suitable in this range. Measurements of launch-induced acoustic loads and subsequent structural response were made on a cantilever beam structure placed in close proximity (200 feet) to the launch pad. Innovative ways of characterizing random, nonstationary, non-Gaussian acoustics are used for the development of a structure's excitation model. Extremely good correlation was obtained between analytically computed responses and those measured on the cantilever beam. Additional tests are recommended to bound the problem to account for variations in launch trajectory and inclination.

  3. How health leaders can benefit from predictive analytics.

    PubMed

    Giga, Aliyah

    2017-11-01

    Predictive analytics can support a better integrated health system providing continuous, coordinated, and comprehensive person-centred care to those who could benefit most. In addition to dollars saved, using a predictive model in healthcare can generate opportunities for meaningful improvements in efficiency, productivity, costs, and better population health with targeted interventions toward patients at risk.

  4. Construct Validation of Analytic Rating Scales in a Speaking Assessment: Reporting a Score Profile and a Composite

    ERIC Educational Resources Information Center

    Sawaki, Yasuyo

    2007-01-01

    This is a construct validation study of a second language speaking assessment that reported a language profile based on analytic rating scales and a composite score. The study addressed three key issues: score dependability, convergent/discriminant validity of analytic rating scales and the weighting of analytic ratings in the composite score.…

  5. Technosocial Predictive Analytics in Support of Naturalistic Decision Making

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanfilippo, Antonio P.; Cowell, Andrew J.; Malone, Elizabeth L.

    2009-06-23

    A main challenge we face in fostering sustainable growth is to anticipate outcomes through predictive and proactive methods across domains as diverse as energy, security, the environment, health, and finance, in order to maximize opportunities, influence outcomes, and counter adversities. The goal of this paper is to present new methods for anticipatory analytical thinking that address this challenge through the development of a multi-perspective approach to predictive modeling as the core of a creative decision-making process. This approach is uniquely multidisciplinary in that it strives to create decision advantage through the integration of human and physical models, and it leverages knowledge management and visual analytics to support creative thinking by facilitating interoperable knowledge inputs and enhancing the user's cognitive access. We describe a prototype system which implements this approach and exemplify its functionality with reference to a use case in which predictive modeling is paired with analytic gaming to support collaborative decision-making in the domain of agricultural land management.

  6. Proactive Supply Chain Performance Management with Predictive Analytics

    PubMed Central

    Stefanovic, Nenad

    2014-01-01

    Today's business climate requires supply chains to be proactive rather than reactive, which demands a new approach that incorporates data mining and predictive analytics. This paper introduces a predictive supply chain performance management model which combines process modelling, performance measurement, data mining models, and web portal technologies into a single unified model. It presents a supply chain modelling approach based on a specialized metamodel which allows modelling of any supply chain configuration at different levels of detail. The paper also presents the supply chain semantic business intelligence (BI) model, which encapsulates data sources and business rules and includes the data warehouse model with specific supply chain dimensions, measures, and KPIs (key performance indicators). Next, the paper describes two generic approaches for designing the KPI predictive data mining models based on the BI semantic model. KPI predictive models were trained and tested with a real-world data set. Finally, a specialized analytical web portal which offers collaborative performance monitoring and decision making is presented. The results show that these models give very accurate KPI projections and provide valuable insights into newly emerging trends, opportunities, and problems. This should lead to more intelligent, predictive, and responsive supply chains capable of adapting to the future business environment. PMID:25386605
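
    The KPI prediction step described above can be sketched in miniature. The snippet below is a hypothetical illustration only (the paper's actual data mining models, features, and data set are not reproduced here): it fits an ordinary least squares model that predicts a next-period on-time-delivery KPI from its lagged value and order volume, using synthetic historical data.

```python
import numpy as np

# Hypothetical miniature of a KPI predictive model (not the paper's actual
# model): predict a next-period on-time-delivery KPI from its lagged value
# and order volume with ordinary least squares on synthetic historical data.
rng = np.random.default_rng(0)
n = 200
lag1 = rng.uniform(0.8, 1.0, n)            # KPI observed in the previous period
volume = rng.uniform(100.0, 500.0, n)      # orders processed per period
kpi = 0.5 * lag1 - 1e-4 * volume + 0.5 + rng.normal(0.0, 0.01, n)

X = np.column_stack([lag1, volume, np.ones(n)])   # design matrix with intercept
coef, *_ = np.linalg.lstsq(X, kpi, rcond=None)    # least-squares fit

rmse = np.sqrt(np.mean((kpi - X @ coef) ** 2))
print(f"fitted coefficients: {coef}, RMSE: {rmse:.4f}")
```

    In practice the paper's approach would replace this toy regression with data mining models trained on warehouse data behind the BI semantic model; the sketch only shows the train-then-project shape of such a KPI model.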

  7. Proactive supply chain performance management with predictive analytics.

    PubMed

    Stefanovic, Nenad

    2014-01-01

    Today's business climate requires supply chains to be proactive rather than reactive, which demands a new approach that incorporates data mining and predictive analytics. This paper introduces a predictive supply chain performance management model which combines process modelling, performance measurement, data mining models, and web portal technologies into a single unified model. It presents a supply chain modelling approach based on a specialized metamodel which allows modelling of any supply chain configuration at different levels of detail. The paper also presents the supply chain semantic business intelligence (BI) model, which encapsulates data sources and business rules and includes the data warehouse model with specific supply chain dimensions, measures, and KPIs (key performance indicators). Next, the paper describes two generic approaches for designing the KPI predictive data mining models based on the BI semantic model. KPI predictive models were trained and tested with a real-world data set. Finally, a specialized analytical web portal which offers collaborative performance monitoring and decision making is presented. The results show that these models give very accurate KPI projections and provide valuable insights into newly emerging trends, opportunities, and problems. This should lead to more intelligent, predictive, and responsive supply chains capable of adapting to the future business environment.

  8. Teaching Analytical Method Transfer through Developing and Validating Then Transferring Dissolution Testing Methods for Pharmaceuticals

    ERIC Educational Resources Information Center

    Kimaru, Irene; Koether, Marina; Chichester, Kimberly; Eaton, Lafayette

    2017-01-01

    Analytical method transfer (AMT) and dissolution testing are important topics required in industry that should be taught in analytical chemistry courses. Undergraduate students in senior level analytical chemistry laboratory courses at Kennesaw State University (KSU) and St. John Fisher College (SJFC) participated in development, validation, and…

  9. Implementing Operational Analytics using Big Data Technologies to Detect and Predict Sensor Anomalies

    NASA Astrophysics Data System (ADS)

    Coughlin, J.; Mital, R.; Nittur, S.; SanNicolas, B.; Wolf, C.; Jusufi, R.

    2016-09-01

    Operational analytics, when combined with Big Data technologies and predictive techniques, has been shown to be valuable in detecting mission-critical sensor anomalies that might be missed by conventional analytical techniques. Our approach helps analysts and leaders make informed and rapid decisions by analyzing large volumes of complex data in near real-time and presenting them in a manner that facilitates decision making. It provides cost savings by alerting and predicting when sensor degradations pass a critical threshold and impact mission operations. Operational analytics, which uses Big Data tools and technologies, can process very large data sets containing a variety of data types to uncover hidden patterns, unknown correlations, and other relevant information. When combined with predictive techniques, it provides a mechanism to monitor and visualize these data sets and provides insight into degradations encountered in large sensor systems such as the space surveillance network. In this study, data from a notional sensor are simulated, and we use Big Data technologies, predictive algorithms, and operational analytics to process the data and predict sensor degradations. This study uses data products that would commonly be analyzed at a site, and it builds on a big data architecture that has previously proven valuable in detecting anomalies. This paper outlines our methodology of implementing an operational analytics solution through data discovery, learning and training of data modeling and predictive techniques, and deployment. Through this methodology, we implement a functional architecture focused on exploring available big data sets and determining practical analytic, visualization, and predictive technologies.
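
    As a hedged sketch of the degradation-prediction idea (not the study's actual architecture or algorithms), the snippet below fits a linear trend to a simulated, slowly degrading sensor accuracy metric and extrapolates the day on which it will cross a critical alert threshold; all names and numbers are invented.

```python
import numpy as np

# Invented minimal sketch of trend-based degradation prediction: fit a
# linear trend to a simulated, slowly degrading sensor accuracy metric,
# then extrapolate when it will cross a critical alert threshold.
rng = np.random.default_rng(1)
days = np.arange(100)                                     # observation window
accuracy = 0.99 - 0.001 * days + rng.normal(0.0, 0.002, 100)
threshold = 0.85                                          # notional alert level

slope, intercept = np.polyfit(days, accuracy, 1)          # linear trend fit
t_cross = (threshold - intercept) / slope                 # slope < 0 here
print(f"predicted threshold crossing around day {t_cross:.0f}")
```

    A production pipeline of the kind the abstract describes would stream far larger, mixed-type data sets and use richer models, but the alert-before-threshold logic is of this shape.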

  10. Experimental validation of finite element and boundary element methods for predicting structural vibration and radiated noise

    NASA Technical Reports Server (NTRS)

    Seybert, A. F.; Wu, T. W.; Wu, X. F.

    1994-01-01

    This research report is presented in three parts. In the first part, acoustical analyses were performed on modes of vibration of the housing of a transmission of a gear test rig developed by NASA. The modes of vibration of the transmission housing were measured using experimental modal analysis. The boundary element method (BEM) was used to calculate the sound pressure and sound intensity on the surface of the housing and the radiation efficiency of each mode. The radiation efficiency of each of the transmission housing modes was then compared to theoretical results for a finite baffled plate. In the second part, analytical and experimental validation of methods to predict structural vibration and radiated noise are presented. A rectangular box excited by a mechanical shaker was used as a vibrating structure. Combined finite element method (FEM) and boundary element method (BEM) models of the apparatus were used to predict the noise level radiated from the box. The FEM was used to predict the vibration, while the BEM was used to predict the sound intensity and total radiated sound power using surface vibration as the input data. Vibration predicted by the FEM model was validated by experimental modal analysis; noise predicted by the BEM was validated by measurements of sound intensity. Three types of results are presented for the total radiated sound power: sound power predicted by the BEM model using vibration data measured on the surface of the box; sound power predicted by the FEM/BEM model; and sound power measured by an acoustic intensity scan. In the third part, the structure used in part two was modified. A rib was attached to the top plate of the structure. The FEM and BEM were then used to predict structural vibration and radiated noise respectively. The predicted vibration and radiated noise were then validated through experimentation.

  11. Validating Semi-analytic Models of High-redshift Galaxy Formation Using Radiation Hydrodynamical Simulations

    NASA Astrophysics Data System (ADS)

    Côté, Benoit; Silvia, Devin W.; O’Shea, Brian W.; Smith, Britton; Wise, John H.

    2018-05-01

    We use a cosmological hydrodynamic simulation calculated with Enzo and the semi-analytic galaxy formation model (SAM) GAMMA to address the chemical evolution of dwarf galaxies in the early universe. The long-term goal of the project is to better understand the origin of metal-poor stars and the formation of dwarf galaxies and the Milky Way halo by cross-validating these theoretical approaches. We combine GAMMA with the merger tree of the most massive galaxy found in the hydrodynamic simulation and compare the star formation rate, the metallicity distribution function (MDF), and the age–metallicity relationship predicted by the two approaches. We found that the SAM can reproduce the global trends of the hydrodynamic simulation. However, there are degeneracies between the model parameters, and more constraints (e.g., star formation efficiency, gas flows) need to be extracted from the simulation to isolate the correct semi-analytic solution. Stochastic processes such as bursty star formation histories and star formation triggered by supernova explosions cannot be reproduced by the current version of GAMMA. Non-uniform mixing in the galaxy’s interstellar medium, coming primarily from self-enrichment by local supernovae, causes a broadening in the MDF that can be emulated in the SAM by convolving its predicted MDF with a Gaussian function having a standard deviation of ∼0.2 dex. We found that the most massive galaxy in the simulation retains nearly 100% of its baryonic mass within its virial radius, which is in agreement with what is needed in GAMMA to reproduce the global trends of the simulation.
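
    The MDF-broadening step quoted above can be illustrated directly. The sketch below uses a toy MDF (not the paper's simulation output); only the 0.2 dex kernel width comes from the abstract. It convolves a narrow synthetic metallicity distribution with a Gaussian kernel and checks that the width grows roughly in quadrature.

```python
import numpy as np

# Toy illustration of the broadening step: convolve a narrow synthetic MDF
# with a Gaussian kernel of standard deviation 0.2 dex to emulate
# non-uniform mixing in the interstellar medium.
step = 0.05
feh = np.arange(-4.0, 0.0, step)                 # [Fe/H] grid in dex
mdf = np.exp(-0.5 * ((feh + 2.0) / 0.1) ** 2)    # narrow synthetic MDF
mdf /= mdf.sum()                                  # normalize to unit area

sigma = 0.2                                       # dex, from the abstract
kx = np.linspace(-1.0, 1.0, 41)                   # kernel support, +/- 5 sigma
kernel = np.exp(-0.5 * (kx / sigma) ** 2)
kernel /= kernel.sum()

broadened = np.convolve(mdf, kernel, mode="same")

def width(p):
    """Standard deviation of a normalized discrete distribution on feh."""
    mu = np.sum(p * feh)
    return np.sqrt(np.sum(p * feh ** 2) - mu ** 2)

# Widths add roughly in quadrature: sqrt(0.1^2 + 0.2^2) ~ 0.22 dex.
print(f"std before: {width(mdf):.3f} dex, after: {width(broadened):.3f} dex")
```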

  12. Analytic cognitive style predicts religious and paranormal belief.

    PubMed

    Pennycook, Gordon; Cheyne, James Allan; Seli, Paul; Koehler, Derek J; Fugelsang, Jonathan A

    2012-06-01

    An analytic cognitive style denotes a propensity to set aside highly salient intuitions when engaging in problem solving. We assess the hypothesis that an analytic cognitive style is associated with a history of questioning, altering, and rejecting (i.e., unbelieving) supernatural claims, both religious and paranormal. In two studies, we examined associations of God beliefs, religious engagement (attendance at religious services, praying, etc.), conventional religious beliefs (heaven, miracles, etc.) and paranormal beliefs (extrasensory perception, levitation, etc.) with performance measures of cognitive ability and analytic cognitive style. An analytic cognitive style negatively predicted both religious and paranormal beliefs when controlling for cognitive ability as well as religious engagement, sex, age, political ideology, and education. Participants more willing to engage in analytic reasoning were less likely to endorse supernatural beliefs. Further, an association between analytic cognitive style and religious engagement was mediated by religious beliefs, suggesting that an analytic cognitive style negatively affects religious engagement via lower acceptance of conventional religious beliefs. Results for types of God belief indicate that the association between an analytic cognitive style and God beliefs is more nuanced than mere acceptance and rejection, but also includes adopting less conventional God beliefs, such as Pantheism or Deism. Our data are consistent with the idea that two people who share the same cognitive ability, education, political ideology, sex, age and level of religious engagement can acquire very different sets of beliefs about the world if they differ in their propensity to think analytically. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. Analytical Finite Element Simulation Model for Structural Crashworthiness Prediction

    DOT National Transportation Integrated Search

    1974-02-01

    The analytical development and appropriate derivations are presented for a simulation model of vehicle crashworthiness prediction. Incremental equations governing the nonlinear elasto-plastic dynamic response of three-dimensional frame structures are...

  14. Validation of an advanced analytical procedure applied to the measurement of environmental radioactivity.

    PubMed

    Thanh, Tran Thien; Vuong, Le Quang; Ho, Phan Long; Chuong, Huynh Dinh; Nguyen, Vo Hoang; Tao, Chau Van

    2018-04-01

    In this work, an advanced analytical procedure was applied to calculate the radioactivity of spiked water samples in close-geometry gamma spectroscopy. It included the MCNP-CP code to calculate the coincidence summing correction factor (CSF). The CSF results were validated against a deterministic method using the ETNA code for both p-type HPGe detectors, showing good agreement between the two codes. Finally, the validity of the developed procedure was confirmed by a proficiency test in which the activities of various radionuclides were calculated. The radioactivity measurements with both detectors using the advanced analytical procedure received "Accepted" status in the proficiency test. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. The Validity of Conscientiousness Is Overestimated in the Prediction of Job Performance.

    PubMed

    Kepes, Sven; McDaniel, Michael A

    2015-01-01

    Sensitivity analyses refer to investigations of the degree to which the results of a meta-analysis remain stable when conditions of the data or the analysis change. To the extent that results remain stable, one can refer to them as robust. Sensitivity analyses are rarely conducted in the organizational science literature. Despite conscientiousness being a valued predictor in employment selection, sensitivity analyses have not been conducted with respect to meta-analytic estimates of the correlation (i.e., validity) between conscientiousness and job performance. To address this deficiency, we reanalyzed the largest collection of conscientiousness validity data in the personnel selection literature and conducted a variety of sensitivity analyses. Publication bias analyses demonstrated that the validity of conscientiousness is moderately overestimated (by around 30%; a correlation difference of about .06). The misestimation of the validity appears to be due primarily to suppression of small effect sizes in the journal literature. These inflated validity estimates result in an overestimate of the dollar utility of personnel selection by millions of dollars and should be of considerable concern for organizations. The fields of management and applied psychology seldom conduct sensitivity analyses. Through the use of sensitivity analyses, this paper documents that the existing literature overestimates the validity of conscientiousness in the prediction of job performance. Our data show that effect sizes from journal articles are largely responsible for this overestimation.

  16. The Promise and Peril of Predictive Analytics in Higher Education: A Landscape Analysis

    ERIC Educational Resources Information Center

    Ekowo, Manuela; Palmer, Iris

    2016-01-01

    Predictive analytics in higher education is a hot-button topic among educators and administrators as institutions strive to better serve students by becoming more data-informed. In this paper, the authors describe how predictive analytics are used in higher education to identify students who need extra support, steer students in courses they will…

  17. Assessment of analytical techniques for predicting solid propellant exhaust plumes

    NASA Technical Reports Server (NTRS)

    Tevepaugh, J. A.; Smith, S. D.; Penny, M. M.

    1977-01-01

    The calculation of solid propellant exhaust plume flow fields is addressed. Two major areas covered are: (1) the applicability of empirical data currently available to define particle drag coefficients, heat transfer coefficients, mean particle size and particle size distributions, and (2) thermochemical modeling of the gaseous phase of the flow field. Comparisons of experimentally measured and analytically predicted data are made. The experimental data were obtained for subscale solid propellant motors with aluminum loadings of 2, 10 and 15%. Analytical predictions were made using a fully coupled two-phase numerical solution. Data comparisons will be presented for radial distributions at plume axial stations of 5, 12, 16 and 20 diameters.

  18. Analytic prediction of unconfined boundary layer flashback limits in premixed hydrogen-air flames

    NASA Astrophysics Data System (ADS)

    Hoferichter, Vera; Hirsch, Christoph; Sattelmayer, Thomas

    2017-05-01

    Flame flashback is a major challenge in premixed combustion. Hence, the prediction of the minimum flow velocity to prevent boundary layer flashback is of high technical interest. This paper presents an analytic approach to predicting boundary layer flashback limits for channel and tube burners. The model reflects the experimentally observed flashback mechanism and consists of a local and global analysis. Based on the local analysis, the flow velocity at flashback initiation is obtained depending on flame angle and local turbulent burning velocity. The local turbulent burning velocity is calculated in accordance with a predictive model for boundary layer flashback limits of duct-confined flames presented by the authors in an earlier publication. This ensures consistency of both models. The flame angle of the stable flame near flashback conditions can be obtained by various methods. In this study, an approach based on global mass conservation is applied and is validated using Mie-scattering images from a channel burner test rig at ambient conditions. The predicted flashback limits are compared to experimental results and to literature data from preheated tube burner experiments. Finally, a method for including the effect of burner exit temperature is demonstrated and used to explain the discrepancies in flashback limits obtained from different burner configurations reported in the literature.

  19. Continuous Metabolic Monitoring Based on Multi-Analyte Biomarkers to Predict Exhaustion

    PubMed Central

    Kastellorizios, Michail; Burgess, Diane J.

    2015-01-01

    This work introduces the concept of multi-analyte biomarkers for continuous metabolic monitoring. The importance of using more than one marker lies in the ability to obtain a holistic understanding of the metabolism. This is showcased for the detection and prediction of exhaustion during intense physical exercise. The findings presented here indicate that when glucose and lactate changes over time are combined into multi-analyte biomarkers, their monitoring trends are more sensitive in the subcutaneous tissue, an implantation-friendly peripheral tissue, compared to the blood. This unexpected observation was confirmed in normal as well as type 1 diabetic rats. This study was designed to be of direct value to continuous monitoring biosensor research, where single analytes are typically monitored. These findings can be implemented in new multi-analyte continuous monitoring technologies for more accurate insulin dosing, as well as for exhaustion prediction studies based on objective data rather than the subject’s perception. PMID:26028477

  20. Continuous metabolic monitoring based on multi-analyte biomarkers to predict exhaustion.

    PubMed

    Kastellorizios, Michail; Burgess, Diane J

    2015-06-01

    This work introduces the concept of multi-analyte biomarkers for continuous metabolic monitoring. The importance of using more than one marker lies in the ability to obtain a holistic understanding of the metabolism. This is showcased for the detection and prediction of exhaustion during intense physical exercise. The findings presented here indicate that when glucose and lactate changes over time are combined into multi-analyte biomarkers, their monitoring trends are more sensitive in the subcutaneous tissue, an implantation-friendly peripheral tissue, compared to the blood. This unexpected observation was confirmed in normal as well as type 1 diabetic rats. This study was designed to be of direct value to continuous monitoring biosensor research, where single analytes are typically monitored. These findings can be implemented in new multi-analyte continuous monitoring technologies for more accurate insulin dosing, as well as for exhaustion prediction studies based on objective data rather than the subject's perception.

  1. Evaluating the Predictive Validity of the Computerized Comprehension Task: Comprehension Predicts Production

    PubMed Central

    Friend, Margaret; Schmitt, Sara A.; Simpson, Adrianne M.

    2017-01-01

    Until recently, the challenges inherent in measuring comprehension have impeded our ability to predict the course of language acquisition. The present research reports on a longitudinal assessment of the convergent and predictive validity of the CDI: Words and Gestures (CDI: WG) and the Computerized Comprehension Task (CCT). The CDI: WG and the CCT evinced good convergent validity; however, the CCT better predicted subsequent parent reports of language production. Language sample data in the third year confirm this finding: the CCT accounted for 24% of the variance in unique word use. These studies provide evidence for the utility of a behavior-based approach to predicting the course of language acquisition into production. PMID:21928878

  2. Risk analysis of analytical validations by probabilistic modification of FMEA.

    PubMed

    Barends, D M; Oldenhof, M T; Vredenbregt, M J; Nauta, M J

    2012-05-01

    Risk analysis is a valuable addition to the validation of an analytical chemistry process, enabling detection not only of technical risks but also of risks related to human failure. Failure Mode and Effect Analysis (FMEA) can be applied, using categorical risk scoring of the occurrence, detection, and severity of failure modes and calculating the Risk Priority Number (RPN) to select failure modes for correction. We propose a probabilistic modification of FMEA, replacing the categorical scoring of occurrence and detection by their estimated relative frequencies while maintaining the categorical scoring of severity. In an example, the results of a traditional FMEA of a Near-Infrared (NIR) analytical procedure used for the screening of suspected counterfeit tablets are reinterpreted using this probabilistic modification. With this approach, the frequency of occurrence of undetected failure modes can be estimated quantitatively for each individual failure mode, for a set of failure modes, and for the full analytical procedure. Copyright © 2012 Elsevier B.V. All rights reserved.

  3. Risk analysis by FMEA as an element of analytical validation.

    PubMed

    van Leeuwen, J F; Nauta, M J; de Kaste, D; Odekerken-Rombouts, Y M C F; Oldenhof, M T; Vredenbregt, M J; Barends, D M

    2009-12-05

    We subjected a Near-Infrared (NIR) analytical procedure used for screening drugs for authenticity to a Failure Mode and Effects Analysis (FMEA), covering technical risks as well as risks related to human failure. An FMEA team broke down the NIR analytical method into process steps and identified possible failure modes for each step. Each failure mode was ranked on estimated frequency of occurrence (O), probability that the failure would remain undetected later in the process (D), and severity (S), each on a scale of 1-10. Human errors turned out to be the most common cause of failure modes. Failure risks were calculated as Risk Priority Numbers (RPNs) = O x D x S. Failure modes with the highest RPN scores were subjected to corrective actions and the FMEA was repeated, showing reductions in RPN scores and resulting in improvement indices of up to 5.0. We recommend risk analysis as an addition to the usual analytical validation, as the FMEA enabled us to detect previously unidentified risks.

  4. Analytical prediction of digital signal crosstalk of FCC

    NASA Technical Reports Server (NTRS)

    Belleisle, A. P.

    1972-01-01

    The results are presented of a study effort whose aim was the development of an accurate means of analyzing and predicting signal crosstalk in multi-wire digital data cables. A complete analytical model is developed for n + 1 wire systems of uniform transmission lines with arbitrary linear boundary conditions. In addition, the minimum set of parameter measurements required for the application of the model is presented. Comparisons between crosstalk predicted by this model and actual measured crosstalk are shown for a six-conductor ribbon cable.

  5. Validation protocol of analytical procedures for quantification of drugs in polymeric systems for parenteral administration: dexamethasone phosphate disodium microparticles.

    PubMed

    Martín-Sabroso, Cristina; Tavares-Fernandes, Daniel Filipe; Espada-García, Juan Ignacio; Torres-Suárez, Ana Isabel

    2013-12-15

    In this work, a protocol to validate analytical procedures for the quantification of drug substances formulated in polymeric systems, covering both drug entrapped in the polymeric matrix (assay:content test) and drug released from the systems (assay:dissolution test), is developed. This protocol is applied to the validation of two isocratic HPLC analytical procedures for the analysis of dexamethasone phosphate disodium microparticles for parenteral administration. Preparation of authentic samples and artificially "spiked" and "unspiked" samples is described. Specificity (the ability to quantify dexamethasone phosphate disodium in the presence of constituents of the dissolution medium and other microparticle constituents), linearity, accuracy, and precision are evaluated, in the range from 10 to 50 μg mL(-1) for the assay:content test procedure and from 0.25 to 10 μg mL(-1) for the assay:dissolution test procedure. The robustness of the analytical method used to extract drug from the microparticles is also assessed. The validation protocol developed allows us to conclude that both analytical methods are suitable for their intended purpose, although the lack of proportionality of the assay:dissolution analytical method should be taken into account. The validation protocol designed in this work could be applied to the validation of any analytical procedure for the quantification of drugs formulated in controlled-release polymeric microparticles. Copyright © 2013 Elsevier B.V. All rights reserved.

  6. Testing a 1-D Analytical Salt Intrusion Model and the Predictive Equation in Malaysian Estuaries

    NASA Astrophysics Data System (ADS)

    Gisen, Jacqueline Isabella; Savenije, Hubert H. G.

    2013-04-01

    Little is known about salt intrusion behaviour in Malaysian estuaries. Studies on this topic often require large amounts of data, especially if 2-D or 3-D numerical models are used for the analysis. In poor data environments, 1-D analytical models are more appropriate. For this reason, a fully analytical 1-D salt intrusion model, based on the theory of Savenije (2005), was tested in three Malaysian estuaries (Bernam, Selangor and Muar) because it is simple and requires minimal data. To achieve this, site surveys were conducted in these estuaries during the dry season (June-August) at spring tide using the moving-boat technique. Data on cross-sections, water levels and salinity were collected and then analysed with the salt intrusion model. This paper demonstrates a good fit between the simulated and observed salinity distributions for all three estuaries. Additionally, the calibrated Van der Burgh coefficient K, dispersion coefficient D0 and salt intrusion length L for the estuaries displayed reasonable correlations with those calculated from the predictive equations. This indicates that both the salt intrusion model and the predictive model are valid for the case studies in Malaysia. Furthermore, the results from this study describe the current state of the estuaries, with which the Malaysian water authority can make decisions on limiting water abstraction or dredging. Keywords: salt intrusion, Malaysian estuaries, discharge, predictive model, dispersion

  7. Consistency of FMEA used in the validation of analytical procedures.

    PubMed

    Oldenhof, M T; van Leeuwen, J F; Nauta, M J; de Kaste, D; Odekerken-Rombouts, Y M C F; Vredenbregt, M J; Weda, M; Barends, D M

    2011-02-20

    In order to explore the consistency of the outcome of a Failure Mode and Effects Analysis (FMEA) in the validation of analytical procedures, an FMEA was carried out by two different teams. The two teams applied two separate FMEAs to a High Performance Liquid Chromatography-Diode Array Detection-Mass Spectrometry (HPLC-DAD-MS) analytical procedure used in the quality control of medicines. Each team was free to define its own ranking scales for the severity (S), occurrence (O), and detection (D) of failure modes. We calculated Risk Priority Numbers (RPNs) and identified failure modes above the 90th percentile of RPN values as needing urgent corrective action; failure modes falling between the 75th and 90th percentiles of RPN values were identified as needing necessary corrective action. Team 1 and Team 2 identified five and six failure modes needing urgent corrective action, respectively, with two identified in common. Of the failure modes needing necessary corrective action, about a third were identified by both teams. These results show inconsistency in the outcome of the FMEA. To improve consistency, we recommend that an FMEA always be carried out under the supervision of an experienced FMEA facilitator and that the FMEA team have at least two members with competence in the analytical method to be validated. However, the FMEAs of both teams contained valuable information that was not identified by the other team, indicating that this inconsistency is not always a drawback. Copyright © 2010 Elsevier B.V. All rights reserved.
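
    The percentile-based triage used by both teams can be sketched as follows. The scores below are invented for illustration (the paper's raw O, D, and S rankings are not reproduced here); only the RPN formula and the 90th/75th percentile cut-offs come from the abstract.

```python
import numpy as np

# Sketch of the FMEA ranking scheme described above: score each failure mode
# on occurrence (O), detection (D), and severity (S) on a 1-10 scale, compute
# RPN = O x D x S, then flag modes above the 90th percentile of RPN values as
# needing urgent corrective action and those between the 75th and 90th
# percentiles as needing necessary corrective action.
rng = np.random.default_rng(42)
scores = rng.integers(1, 11, size=(40, 3))      # 40 failure modes x (O, D, S)
rpn = scores.prod(axis=1)                       # Risk Priority Numbers

p75, p90 = np.percentile(rpn, [75, 90])
urgent = np.flatnonzero(rpn > p90)
necessary = np.flatnonzero((rpn > p75) & (rpn <= p90))
print(f"{len(urgent)} urgent and {len(necessary)} necessary corrective actions")
```

    Because each team defines its own 1-10 scales, the same failure mode can land in different percentile bands for different teams, which is exactly the inconsistency the study reports.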

  8. Measuring Students' Writing Ability on a Computer-Analytic Developmental Scale: An Exploratory Validity Study

    ERIC Educational Resources Information Center

    Burdick, Hal; Swartz, Carl W.; Stenner, A. Jackson; Fitzgerald, Jill; Burdick, Don; Hanlon, Sean T.

    2013-01-01

    The purpose of the study was to explore the validity of a novel computer-analytic developmental scale, the Writing Ability Developmental Scale. On the whole, collective results supported the validity of the scale. It was sensitive to writing ability differences across grades and sensitive to within-grade variability as compared to human-rated…

  9. The Validity of Conscientiousness Is Overestimated in the Prediction of Job Performance

    PubMed Central

    2015-01-01

    Introduction Sensitivity analyses refer to investigations of the degree to which the results of a meta-analysis remain stable when conditions of the data or the analysis change. To the extent that results remain stable, one can refer to them as robust. Sensitivity analyses are rarely conducted in the organizational science literature. Despite conscientiousness being a valued predictor in employment selection, sensitivity analyses have not been conducted with respect to meta-analytic estimates of the correlation (i.e., validity) between conscientiousness and job performance. Methods To address this deficiency, we reanalyzed the largest collection of conscientiousness validity data in the personnel selection literature and conducted a variety of sensitivity analyses. Results Publication bias analyses demonstrated that the validity of conscientiousness is moderately overestimated (by around 30%; a correlation difference of about .06). The misestimation of the validity appears to be due primarily to suppression of small effect sizes in the journal literature. These inflated validity estimates result in an overestimate of the dollar utility of personnel selection by millions of dollars and should be of considerable concern to organizations. Conclusion The fields of management and applied psychology seldom conduct sensitivity analyses. Through the use of sensitivity analyses, this paper documents that the existing literature overestimates the validity of conscientiousness in the prediction of job performance. Our data show that effect sizes from journal articles are largely responsible for this overestimation. PMID:26517553

  10. LATUX: An Iterative Workflow for Designing, Validating, and Deploying Learning Analytics Visualizations

    ERIC Educational Resources Information Center

    Martinez-Maldonado, Roberto; Pardo, Abelardo; Mirriahi, Negin; Yacef, Kalina; Kay, Judy; Clayphan, Andrew

    2015-01-01

    Designing, validating, and deploying learning analytics tools for instructors or students is a challenge that requires techniques and methods from different disciplines, such as software engineering, human-computer interaction, computer graphics, educational design, and psychology. Whilst each has established its own design methodologies, we now…

  11. Chapter 16 - Predictive Analytics for Comprehensive Energy Systems State Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yingchen; Yang, Rui; Hodge, Brian S

    Energy sustainability is a subject of concern to many nations in the modern world. It is critical for electric power systems to diversify energy supply to include systems with different physical characteristics, such as wind energy, solar energy, electrochemical energy storage, thermal storage, bio-energy systems, geothermal, and ocean energy. Each system has its own range of control variables and targets. To be able to operate such a complex energy system, big-data analytics become critical to achieve the goal of predicting energy supplies and consumption patterns, assessing system operation conditions, and estimating system states - all providing situational awareness to power system operators. This chapter presents data analytics and machine learning-based approaches to enable predictive situational awareness of the power systems.

  12. Analytical validation of quantitative immunohistochemical assays of tumor infiltrating lymphocyte biomarkers.

    PubMed

    Singh, U; Cui, Y; Dimaano, N; Mehta, S; Pruitt, S K; Yearley, J; Laterza, O F; Juco, J W; Dogdas, B

    2018-06-04

    Tumor infiltrating lymphocytes (TIL), especially T-cells, have both prognostic and therapeutic applications. The presence of CD8+ effector T-cells and the ratio of CD8+ cells to FOXP3+ regulatory T-cells have been used as biomarkers of disease prognosis to predict response to various immunotherapies. Blocking the interaction between inhibitory receptors on T-cells and their ligands with therapeutic antibodies including atezolizumab, nivolumab, pembrolizumab and tremelimumab increases the immune response against cancer cells and has shown significant improvement in clinical benefits and survival in several different tumor types. The improved clinical outcome is presumed to be associated with a higher tumor infiltration; therefore, it is thought that more accurate methods for measuring the amount of TIL could assist prognosis and predict treatment response. We have developed and validated quantitative immunohistochemistry (IHC) assays for CD3, CD8 and FOXP3 for immunophenotyping T-lymphocytes in tumor tissue. Various types of formalin-fixed, paraffin-embedded (FFPE) tumor tissues were immunolabeled with anti-CD3, anti-CD8 and anti-FOXP3 antibodies using an IHC autostainer. The tumor area of stained tissues, including the invasive margin of the tumor, was scored by a pathologist (visual scoring) and by computer-based quantitative image analysis. Two image analysis scores were obtained for the staining of each biomarker: the percent positive cells in the tumor area and positive cells/mm² of tumor area. Comparison of visual vs. image analysis scoring methods using regression analysis showed high correlation and indicated that quantitative image analysis can be used to score the number of positive cells in IHC stained slides. To demonstrate that the IHC assays produce consistent results in normal daily testing, we evaluated the specificity, sensitivity and reproducibility of the IHC assays using both visual and image analysis scoring methods. We found that CD3, CD8 and

  13. Predicting Student Success using Analytics in Course Learning Management Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olama, Mohammed M; Thakur, Gautam; McNair, Wade

    Educational data analytics is an emerging discipline, concerned with developing methods for exploring the unique types of data that come from the educational context. For example, predicting college student performance is crucial for both the student and educational institutions. It can support timely intervention to prevent students from failing a course, increase the efficacy of advising functions, and improve course completion rates. In this paper, we present the efforts carried out at Oak Ridge National Laboratory (ORNL) toward applying predictive analytics to academic data collected from 2009 through 2013 and available in one of the most commonly used learning management systems, called Moodle. First, we identified the data features useful for predicting student outcomes, such as students' scores in homework assignments, quizzes, and exams, in addition to their activities in discussion forums and their total GPA in the term they enrolled in the course. Then, Logistic Regression and Neural Network predictive models are used to identify, as early as possible, students who are in danger of failing the course they are currently enrolled in. These models compute the likelihood of any given student failing (or passing) the current course. Numerical results are presented to evaluate and compare the performance of the developed models and their predictive accuracy.

  14. Predicting student success using analytics in course learning management systems

    NASA Astrophysics Data System (ADS)

    Olama, Mohammed M.; Thakur, Gautam; McNair, Allen W.; Sukumar, Sreenivas R.

    2014-05-01

    Educational data analytics is an emerging discipline, concerned with developing methods for exploring the unique types of data that come from the educational context. For example, predicting college student performance is crucial for both the student and educational institutions. It can support timely intervention to prevent students from failing a course, increase the efficacy of advising functions, and improve course completion rates. In this paper, we present the efforts carried out at Oak Ridge National Laboratory (ORNL) toward applying predictive analytics to academic data collected from 2009 through 2013 and available in one of the most commonly used learning management systems, called Moodle. First, we identified the data features useful for predicting student outcomes, such as students' scores in homework assignments, quizzes, and exams, in addition to their activities in discussion forums and their total GPA in the term they enrolled in the course. Then, Logistic Regression and Neural Network predictive models are used to identify, as early as possible, students who are in danger of failing the course they are currently enrolled in. These models compute the likelihood of any given student failing (or passing) the current course. Numerical results are presented to evaluate and compare the performance of the developed models and their predictive accuracy.
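As a minimal sketch of the first of the two model types above (not ORNL's actual pipeline), a logistic regression can be fitted to hypothetical Moodle-style features and used to flag students whose predicted failure probability exceeds 50%. The feature names and data below are illustrative.

```python
import math
import random

random.seed(0)

# Hypothetical per-student features: [homework avg, quiz avg, scaled GPA]
students = [[random.random(), random.random(), random.random()] for _ in range(200)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic labels: lower scores make failing more likely (assumed generator)
labels = [
    1 if random.random() < sigmoid(2.5 - 2 * x[0] - 2 * x[1] - 1.5 * x[2]) else 0
    for x in students
]

# Fit logistic regression (intercept first) by batch gradient descent
w = [0.0, 0.0, 0.0, 0.0]
for _ in range(3000):
    grad = [0.0] * 4
    for x, y in zip(students, labels):
        err = sigmoid(w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))) - y
        grad[0] += err
        for j, xj in enumerate(x):
            grad[j + 1] += err * xj
    w = [wi - 0.5 * g / len(students) for wi, g in zip(w, grad)]

def p_fail(x):
    """Predicted probability that a student with features x fails."""
    return sigmoid(w[0] + sum(wi * xi for wi, xi in zip(w[1:], x)))

at_risk = [i for i, x in enumerate(students) if p_fail(x) > 0.5]
print(f"{len(at_risk)} of {len(students)} students flagged as at risk of failing")
```

In a real early-warning system the model would be trained on past terms and applied partway through the current term, so that advisors can intervene before the course ends.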

  15. Reports of the AAAI 2009 Spring Symposia: Technosocial Predictive Analytics.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanfilippo, Antonio P.

    2009-10-01

    The Technosocial Predictive Analytics AAAI symposium was held at Stanford University, Stanford, CA, March 23-25, 2009. The goal of this symposium was to explore new methods for anticipatory analytical thinking that provide decision advantage through the integration of human and physical models. Special attention was also placed on how to leverage supporting disciplines to (a) facilitate the achievement of knowledge inputs, (b) improve the user experience, and (c) foster social intelligence through collaborative/competitive work.

  16. PARAMO: A Parallel Predictive Modeling Platform for Healthcare Analytic Research using Electronic Health Records

    PubMed Central

    Ng, Kenney; Ghoting, Amol; Steinhubl, Steven R.; Stewart, Walter F.; Malin, Bradley; Sun, Jimeng

    2014-01-01

    Objective Healthcare analytics research increasingly involves the construction of predictive models for disease targets across varying patient cohorts using electronic health records (EHRs). To facilitate this process, it is critical to support a pipeline of tasks: 1) cohort construction, 2) feature construction, 3) cross-validation, 4) feature selection, and 5) classification. To develop an appropriate model, it is necessary to compare and refine models derived from a diversity of cohorts, patient-specific features, and statistical frameworks. The goal of this work is to develop and evaluate a predictive modeling platform that can be used to simplify and expedite this process for health data. Methods To support this goal, we developed a PARAllel predictive MOdeling (PARAMO) platform which 1) constructs a dependency graph of tasks from specifications of predictive modeling pipelines, 2) schedules the tasks in a topological ordering of the graph, and 3) executes those tasks in parallel. We implemented this platform using Map-Reduce to enable independent tasks to run in parallel in a cluster computing environment. Different task scheduling preferences are also supported. Results We assess the performance of PARAMO on various workloads using three datasets derived from the EHR systems in place at Geisinger Health System and Vanderbilt University Medical Center and an anonymous longitudinal claims database. We demonstrate significant gains in computational efficiency against a standard approach. In particular, PARAMO can build 800 different models on a 300,000 patient data set in 3 hours in parallel compared to 9 days if running sequentially. Conclusion This work demonstrates that an efficient parallel predictive modeling platform can be developed for EHR data. This platform can facilitate large-scale modeling endeavors and speed-up the research workflow and reuse of health information. This platform is only a first step and provides the foundation for our ultimate

  17. PARAMO: a PARAllel predictive MOdeling platform for healthcare analytic research using electronic health records.

    PubMed

    Ng, Kenney; Ghoting, Amol; Steinhubl, Steven R; Stewart, Walter F; Malin, Bradley; Sun, Jimeng

    2014-04-01

    Healthcare analytics research increasingly involves the construction of predictive models for disease targets across varying patient cohorts using electronic health records (EHRs). To facilitate this process, it is critical to support a pipeline of tasks: (1) cohort construction, (2) feature construction, (3) cross-validation, (4) feature selection, and (5) classification. To develop an appropriate model, it is necessary to compare and refine models derived from a diversity of cohorts, patient-specific features, and statistical frameworks. The goal of this work is to develop and evaluate a predictive modeling platform that can be used to simplify and expedite this process for health data. To support this goal, we developed a PARAllel predictive MOdeling (PARAMO) platform which (1) constructs a dependency graph of tasks from specifications of predictive modeling pipelines, (2) schedules the tasks in a topological ordering of the graph, and (3) executes those tasks in parallel. We implemented this platform using Map-Reduce to enable independent tasks to run in parallel in a cluster computing environment. Different task scheduling preferences are also supported. We assess the performance of PARAMO on various workloads using three datasets derived from the EHR systems in place at Geisinger Health System and Vanderbilt University Medical Center and an anonymous longitudinal claims database. We demonstrate significant gains in computational efficiency against a standard approach. In particular, PARAMO can build 800 different models on a 300,000 patient data set in 3 hours in parallel compared to 9 days if running sequentially. This work demonstrates that an efficient parallel predictive modeling platform can be developed for EHR data. This platform can facilitate large-scale modeling endeavors and speed-up the research workflow and reuse of health information. This platform is only a first step and provides the foundation for our ultimate goal of building analytic pipelines
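The dependency-graph scheduling that PARAMO performs can be sketched with the standard library's topological sorter: tasks whose prerequisites are all complete form a "wave" that could run in parallel. The task names below mirror the pipeline stages listed in the abstract but are illustrative, not PARAMO's actual API.

```python
from graphlib import TopologicalSorter

# Pipeline tasks mapped to their prerequisites (illustrative names)
deps = {
    "cohort_construction": set(),
    "demographic_features": {"cohort_construction"},
    "lab_features": {"cohort_construction"},
    "cross_validation_split": {"demographic_features", "lab_features"},
    "feature_selection": {"cross_validation_split"},
    "classification": {"feature_selection"},
}

ts = TopologicalSorter(deps)
ts.prepare()

waves = []
while ts.is_active():
    ready = sorted(ts.get_ready())  # all tasks whose prerequisites are done
    waves.append(ready)             # one wave could execute in parallel
    ts.done(*ready)

for i, wave in enumerate(waves):
    print(f"wave {i}: {wave}")
```

Here the two feature-construction tasks land in the same wave, which is exactly the independence PARAMO exploits (via Map-Reduce) to cut 9 days of sequential work down to hours.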

  18. Scaling Student Success with Predictive Analytics: Reflections after Four Years in the Data Trenches

    ERIC Educational Resources Information Center

    Wagner, Ellen; Longanecker, David

    2016-01-01

    The metrics used in the US to track students do not include adults and part-time students. This has led to the development of a massive data initiative--the Predictive Analytics Reporting (PAR) framework--that uses predictive analytics to trace the progress of all types of students in the system. This development has allowed actionable,…

  19. Sustained prediction ability of net analyte preprocessing methods using reduced calibration sets. Theoretical and experimental study involving the spectrophotometric analysis of multicomponent mixtures.

    PubMed

    Goicoechea, H C; Olivieri, A C

    2001-07-01

    A newly developed multivariate method involving net analyte preprocessing (NAP) was tested using central composite calibration designs of progressively decreasing size regarding the multivariate simultaneous spectrophotometric determination of three active components (phenylephrine, diphenhydramine and naphazoline) and one excipient (methylparaben) in nasal solutions. Its performance was evaluated and compared with that of partial least-squares (PLS-1). Minimisation of the calibration predicted error sum of squares (PRESS) as a function of a moving spectral window helped to select appropriate working spectral ranges for both methods. The comparison of NAP and PLS results was carried out using two tests: (1) the elliptical joint confidence region for the slope and intercept of a predicted versus actual concentrations plot for a large validation set of samples and (2) the D-optimality criterion concerning the information content of the calibration data matrix. Extensive simulations and experimental validation showed that, unlike PLS, the NAP method is able to furnish highly satisfactory results when the calibration set is reduced from a full four-component central composite to a fractional central composite, as expected from the modelling requirements of net analyte based methods.
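The PRESS criterion used above to select a working spectral window can be illustrated with a leave-one-out calculation for a univariate linear calibration. The absorbance/concentration data below are hypothetical, and for simplicity the fit predicts absorbance from concentration rather than the multivariate inverse calibration of the study.

```python
def press(xs, ys):
    """Leave-one-out prediction error sum of squares for y = a + b*x."""
    total = 0.0
    for i in range(len(xs)):
        # Fit ordinary least squares on all points except i
        xt = [x for j, x in enumerate(xs) if j != i]
        yt = [y for j, y in enumerate(ys) if j != i]
        n = len(xt)
        mx, my = sum(xt) / n, sum(yt) / n
        b = sum((x - mx) * (y - my) for x, y in zip(xt, yt)) / \
            sum((x - mx) ** 2 for x in xt)
        a = my - b * mx
        # Accumulate the squared error on the held-out point
        total += (ys[i] - (a + b * xs[i])) ** 2
    return total

conc = [1.0, 2.0, 3.0, 4.0, 5.0]
abs_window1 = [0.11, 0.20, 0.31, 0.39, 0.52]  # hypothetical, nearly linear
abs_window2 = [0.10, 0.35, 0.30, 0.60, 0.48]  # hypothetical, noisier window
print("prefer window 1:", press(conc, abs_window1) < press(conc, abs_window2))
```

Sliding such a window across the spectrum and keeping the range with minimal PRESS is the selection strategy the abstract describes for both NAP and PLS.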

  20. Override the controversy: Analytic thinking predicts endorsement of evolution.

    PubMed

    Gervais, Will M

    2015-09-01

    Despite overwhelming scientific consensus, popular opinions regarding evolution are starkly divided. In the USA, for example, nearly one in three adults espouse a literal and recent divine creation account of human origins. Plausibly, resistance to scientific conclusions regarding the origins of species, like much resistance to other scientific conclusions (Bloom & Weisberg, 2007), gains support from reliably developing intuitions. Intuitions about essentialism, teleology, agency, and order may combine to make creationism potentially more cognitively attractive than evolutionary concepts. However, dual process approaches to cognition recognize that people can often analytically override their intuitions. Two large studies (total N=1324) found consistent evidence that a tendency to engage in analytic thinking predicted endorsement of evolution, even controlling for relevant demographic, attitudinal, and religious variables. Meanwhile, exposure to religion predicted reduced endorsement of evolution. Cognitive style is one factor among many affecting opinions on the origin of species. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Assessing Discriminative Performance at External Validation of Clinical Prediction Models

    PubMed Central

    Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W.

    2016-01-01

    Introduction External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. Methods We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. Results The permutation test indicated that the validation and development set were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. Conclusion The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients.

  2. Assessing Discriminative Performance at External Validation of Clinical Prediction Models.

    PubMed

    Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W

    2016-01-01

    External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. The permutation test indicated that the validation and development set were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients.
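The c-statistic discussed above can be computed directly as the probability that a randomly chosen event receives a higher predicted risk than a randomly chosen non-event, counting ties as one half. The scores and outcomes below are made up for illustration.

```python
def c_statistic(scores, outcomes):
    """Concordance: fraction of event/non-event pairs ranked correctly,
    with ties counted as 0.5."""
    events = [s for s, y in zip(scores, outcomes) if y == 1]
    nonevents = [s for s, y in zip(scores, outcomes) if y == 0]
    concordant = sum(
        (e > ne) + 0.5 * (e == ne) for e in events for ne in nonevents
    )
    return concordant / (len(events) * len(nonevents))

# Hypothetical linear-predictor scores and observed outcomes (1 = event)
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2]
outcomes = [1, 1, 0, 1, 0, 0, 0]
print(round(c_statistic(scores, outcomes), 3))
```

Comparing this value between the development and validation sets is the starting point of the procedures the abstract evaluates; the study's message is that a drop in c-statistic may reflect a narrower case-mix rather than miscalibrated coefficients.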

  3. Predicting and explaining inflammation in Crohn's disease patients using predictive analytics methods and electronic medical record data.

    PubMed

    Reddy, Bhargava K; Delen, Dursun; Agrawal, Rupesh K

    2018-01-01

    Crohn's disease is among the chronic inflammatory bowel diseases that impact the gastrointestinal tract. Understanding and predicting the severity of inflammation in real-time settings is critical to disease management. Extant literature has primarily focused on studies that are conducted in clinical trial settings to investigate the impact of a drug treatment on the remission status of the disease. This research proposes an analytics methodology where three different types of prediction models are developed to predict and to explain the severity of inflammation in patients diagnosed with Crohn's disease. The results show that machine-learning-based analytic methods such as gradient boosting machines can predict the inflammation severity with a very high accuracy (area under the curve = 92.82%), followed by regularized regression and logistic regression. According to the findings, a combination of baseline laboratory parameters, patient demographic characteristics, and disease location are among the strongest predictors of inflammation severity in Crohn's disease patients.
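The gradient-boosting idea behind the best-performing model above can be illustrated with a toy version: regression stumps fitted iteratively to residuals. The single "lab parameter" feature, thresholds, and labels below are hypothetical, and real gradient boosting machines add many refinements (log-loss gradients, deeper trees, subsampling, regularization).

```python
def stump_fit(xs, residuals):
    """Return the regression stump (threshold split) minimizing squared error."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    _, t, lm, rm = best
    return lambda x, t=t, lm=lm, rm=rm: lm if x <= t else rm

# Hypothetical lab parameter values and inflammation labels (1 = severe)
xs = [1.0, 1.5, 2.0, 2.2, 3.5, 4.0, 4.5, 5.0]
ys = [0, 0, 0, 1, 0, 1, 1, 1]

shrinkage, rounds = 0.5, 20
stumps = []
pred = [0.0] * len(xs)
for _ in range(rounds):
    residuals = [y - p for y, p in zip(ys, pred)]  # squared-loss gradients
    s = stump_fit(xs, residuals)
    stumps.append(s)
    pred = [p + shrinkage * s(x) for p, x in zip(pred, xs)]

def predict(x):
    return sum(shrinkage * s(x) for s in stumps)

print([round(predict(x), 2) for x in xs])
```

Each round corrects what the ensemble so far gets wrong, which is why boosted trees often outperform a single regularized or logistic regression, as the study found.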

  4. A two-dimensional analytical model and experimental validation of garter stitch knitted shape memory alloy actuator architecture

    NASA Astrophysics Data System (ADS)

    Abel, Julianna; Luntz, Jonathan; Brei, Diann

    2012-08-01

    Active knits are a unique architectural approach to meeting emerging smart structure needs for distributed high strain actuation with simultaneous force generation. This paper presents an analytical state-based model for predicting the actuation response of a shape memory alloy (SMA) garter knit textile. Garter knits generate significant contraction against moderate to large loads when heated, due to the continuous interlocked network of loops of SMA wire. For this knit architecture, the states of operation are defined on the basis of the thermal and mechanical loading of the textile, the resulting phase change of the SMA, and the load path followed to that state. Transitions between these operational states induce either stick or slip frictional forces depending upon the state and path, which affect the actuation response. A load-extension model of the textile is derived for each operational state using elastica theory and Euler-Bernoulli beam bending for the large deformations within a loop of wire based on the stress-strain behavior of the SMA material. This provides kinematic and kinetic relations which scale to form analytical transcendental expressions for the net actuation motion against an external load. This model was validated experimentally for an SMA garter knit textile over a range of applied forces with good correlation for both the load-extension behavior in each state as well as the net motion produced during the actuation cycle (250% recoverable strain and over 50% actuation). The two-dimensional analytical model of the garter stitch active knit provides the ability to predict the kinetic actuation performance, providing the basis for the design and synthesis of large stroke, large force distributed actuators that employ this novel architecture.

  5. Challenges in Rotorcraft Acoustic Flight Prediction and Validation

    NASA Technical Reports Server (NTRS)

    Boyd, D. Douglas, Jr.

    2003-01-01

    Challenges associated with rotorcraft acoustic flight prediction and validation are examined. First, an outline of a state-of-the-art rotorcraft aeroacoustic prediction methodology is presented. Components including rotorcraft aeromechanics, high resolution reconstruction, and rotorcraft acoustic prediction are discussed. Next, to illustrate the challenges and issues involved, a case study is presented in which an analysis of flight data from a specific XV-15 tiltrotor acoustic flight test is discussed in detail. Issues related to validation of methodologies using flight test data are discussed. Primary flight parameters such as velocity, altitude, and attitude are discussed and compared for repeated flight conditions. Other measured steady-state flight conditions are examined for consistency and steadiness. A representative example prediction is presented and suggestions are made for future research.

  6. Predictive Analytics for Coordinated Optimization in Distribution Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Rui

    This talk will present NREL's work on developing predictive analytics that enable the optimal coordination of all available resources in distribution systems to achieve the control objectives of system operators. Two projects will be presented: one focuses on developing short-term state-forecasting-based optimal voltage regulation in distribution systems, and the other focuses on actively engaging electricity consumers to benefit distribution system operations.

  7. Analytical validation of a psychiatric pharmacogenomic test.

    PubMed

    Jablonski, Michael R; King, Nina; Wang, Yongbao; Winner, Joel G; Watterson, Lucas R; Gunselman, Sandra; Dechairo, Bryan M

    2018-05-01

    The aim of this study was to validate the analytical performance of a combinatorial pharmacogenomics test designed to aid in appropriate medication selection for neuropsychiatric conditions. Genomic DNA was isolated from buccal swabs. Twelve genes (65 variants/alleles) associated with psychotropic medication metabolism, side effects, and mechanisms of action were evaluated by bead array, MALDI-TOF mass spectrometry, and/or capillary electrophoresis methods (GeneSight Psychotropic, Assurex Health, Inc.). The combinatorial pharmacogenomics test has a dynamic range of 2.5-20 ng/μl of input genomic DNA, with comparable performance for all assays included in the test. Both the precision and accuracy of the test were >99.9%, with individual gene components between 99.4 and 100%. This study demonstrates that the combinatorial pharmacogenomics test is robust and reproducible, making it suitable for clinical use.

  8. The Predictive Validity of Teacher Candidate Letters of Reference

    ERIC Educational Resources Information Center

    Mason, Richard W.; Schroeder, Mark P.

    2014-01-01

    Letters of reference are widely used as an essential part of the hiring process of newly licensed teachers. While the predictive validity of these letters of reference has been called into question it has never been empirically studied. The current study examined the predictive validity of the quality of letters of reference for forty-one student…

  9. Personality, Cognitive Style, Motivation, and Aptitude Predict Systematic Trends in Analytic Forecasting Behavior.

    PubMed

    Poore, Joshua C; Forlines, Clifton L; Miller, Sarah M; Regan, John R; Irvine, John M

    2014-12-01

    The decision sciences are increasingly challenged to advance methods for modeling analysts, accounting for both analytic strengths and weaknesses, to improve inferences taken from increasingly large and complex sources of data. We examine whether psychometric measures (personality, cognitive style, motivated cognition) predict analytic performance and whether psychometric measures are competitive with aptitude measures (i.e., SAT scores) as analyst sample selection criteria. A heterogeneous, national sample of 927 participants completed an extensive battery of psychometric measures and aptitude tests and was asked 129 geopolitical forecasting questions over the course of 1 year. Factor analysis reveals four dimensions among the psychometric measures; dimensions characterized by differently motivated "top-down" cognitive styles predicted distinctive patterns in aptitude and forecasting behavior. These dimensions were not better predictors of forecasting accuracy than aptitude measures. However, multiple regression and mediation analysis reveals that these dimensions influenced forecasting accuracy primarily through bias in forecasting confidence. We also found that these facets were competitive with aptitude tests as forecast sampling criteria designed to mitigate biases in forecasting confidence while maximizing accuracy. These findings inform the understanding of individual difference dimensions at the intersection of analytic aptitude and demonstrate that they wield predictive power in applied, analytic domains.

  10. Analytical Approach Validation for the Spin-Stabilized Satellite Attitude

    NASA Technical Reports Server (NTRS)

    Zanardi, Maria Cecilia F. P. S.; Garcia, Roberta Veloso; Kuga, Helio Koiti

    2007-01-01

    An analytical approach for spin-stabilized spacecraft attitude prediction is presented for the influence of residual magnetic torques on a satellite in an elliptical orbit. Assuming a quadrupole model for the Earth's magnetic field, an analytical averaging method is applied to obtain the mean residual torque over each orbital period. The orbit mean anomaly is used to compute the average components of the residual torque in the spacecraft body frame reference system. The theory is developed for time variations in the orbital elements, giving rise to many curvature integrals. It is observed that the residual magnetic torque has no component along the spin axis. The inclusion of this torque in the rotational motion differential equations of a spin-stabilized spacecraft yields conditions to derive an analytical solution. The solution shows that the residual torque does not affect the spin velocity magnitude, contributing only to the precession and the drift of the spacecraft's spin axis. The theory has been applied to the Brazilian spin-stabilized satellites, which are quite appropriate for verification and comparison of the theory with the data generated and processed by the Satellite Control Center of the Brazil National Research Institute. The results show the period over which the analytical solution can be used for attitude propagation, within the dispersion range of the performance of the attitude determination system of the Satellite Control Center.

  11. Comparison between numeric and approximate analytic solutions for the prediction of soil metal uptake by roots. Example of cadmium.

    PubMed

    Schneider, André; Lin, Zhongbing; Sterckeman, Thibault; Nguyen, Christophe

    2018-04-01

    The dissociation of metal complexes in the soil solution can increase the availability of metals for root uptake. When it is accounted for in models of bioavailability of soil metals, the number of partial differential equations (PDEs) increases, and the computation time to solve these equations numerically may be problematic when a large number of simulations are required, for example for sensitivity analyses or when considering root architecture. This work presents analytical solutions for the set of PDEs describing the bioavailability of soil metals, including the kinetics of complexation, for three scenarios in which the metal complex in solution is fully inert, fully labile, or partially labile. The analytical solutions are only valid i) at steady state, when the PDEs become ordinary differential equations (the transient phase is not covered), ii) when diffusion is the major mechanism of transport and convection is therefore negligible, and iii) when there is no between-root competition. The analytical solutions are formulated for cylindrical geometry, but they rely on the spread of the depletion profile around the root, which was modelled assuming planar geometry. The analytical solutions were evaluated by comparison with the corresponding PDEs for cadmium in French agricultural soils. Provided that convection was much lower than diffusion (Péclet number < 0.02), the cumulative uptakes calculated from the analytic solutions were in very good agreement with those calculated from the PDEs, even in the case of a partially labile complex. The analytic solutions can be used instead of the PDEs to predict root uptake of metals. They were also used to build an indicator of the contribution of a complex to the uptake of the metal by roots, which can be helpful for predicting the effect of soluble organic matter on the bioavailability of soil metals. Copyright © 2017 Elsevier B.V. All rights reserved.
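
    The diffusion-dominance criterion reported above can be sketched numerically. A minimal Python check of the condition under which the analytic solutions were found to agree with the PDEs; the parameter values (v, r0, D) are illustrative, not taken from the paper:

```python
import math

def peclet(v, r0, D):
    """Peclet number Pe = v*r0/D: ratio of the convective water flux at
    the root surface (v, m/s) to effective diffusion (D, m^2/s) over the
    root radius r0 (m)."""
    return v * r0 / D

# Hypothetical values for cadmium transport toward a root in topsoil
v = 5e-9    # water influx at the root surface, m/s
r0 = 2e-4   # root radius, m
D = 1e-10   # effective diffusion coefficient in soil, m^2/s

pe = peclet(v, r0, D)
# The paper reports very good agreement between the analytic solutions
# and the PDEs when convection is negligible, i.e. Pe < 0.02.
analytic_ok = pe < 0.02
```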

  12. Determining passive cooling limits in CPV using an analytical thermal model

    NASA Astrophysics Data System (ADS)

    Gualdi, Federico; Arenas, Osvaldo; Vossier, Alexis; Dollet, Alain; Aimez, Vincent; Arès, Richard

    2013-09-01

    We propose an original analytical thermal model aimed at predicting the practical limits of passive cooling systems for high-concentration photovoltaic modules. The analytical model is described and validated by comparison with a commercial 3D finite element model. The limiting performance of flat-plate cooling systems under natural convection is then derived and discussed.
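
    The kind of limit such a model explores can be illustrated with a one-node thermal-resistance sketch; all values below are hypothetical operating-point assumptions, and the paper's model is far more detailed:

```python
def cell_temperature(C, DNI, eta, A_cell, h, A_plate, T_amb):
    """One-node sketch: the solar power not converted to electricity is
    dissipated by a flat plate with natural-convection coefficient h
    (W/m^2/K).  Returns the steady-state cell temperature (K)."""
    q = C * DNI * A_cell * (1.0 - eta)   # dissipated power, W
    return T_amb + q / (h * A_plate)

# Hypothetical operating point: 500x concentration, 900 W/m^2 direct
# irradiance, 40 % efficient cell of 1 cm^2, 5 W/m^2/K free convection
# on a 0.1 m^2 plate, 298 K ambient.
T = cell_temperature(500, 900, 0.40, 1e-4, 5.0, 0.1, 298.0)
```

A check like this makes the passive limit visible: halving the plate area doubles the temperature rise above ambient.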

  13. The stroke impairment assessment set: its internal consistency and predictive validity.

    PubMed

    Tsuji, T; Liu, M; Sonoda, S; Domen, K; Chino, N

    2000-07-01

    To study the scale quality and predictive validity of the Stroke Impairment Assessment Set (SIAS) developed for stroke outcome research. Rasch analysis of the SIAS; stepwise multiple regression analysis to predict discharge functional independence measure (FIM) raw scores from demographic data, the SIAS scores, and the admission FIM scores; cross-validation of the prediction rule. Tertiary rehabilitation center in Japan. One hundred ninety stroke inpatients for the study of the scale quality and the predictive validity; a second sample of 116 stroke inpatients for the cross-validation study. Mean square fit statistics to study the degree of fit to the unidimensional model; logits to express item difficulties; discharge FIM scores for the study of predictive validity. The degree of misfit was acceptable except for the shoulder range of motion (ROM), pain, visuospatial function, and speech items; and the SIAS items could be arranged on a common unidimensional scale. The difficulty patterns were identical at admission and at discharge except for the deep tendon reflexes, ROM, and pain items. They were also similar for the right- and left-sided brain lesion groups except for the speech and visuospatial items. For the prediction of the discharge FIM scores, the independent variables selected were age, the SIAS total scores, and the admission FIM scores; and the adjusted R2 was .64 (p < .0001). Stability of the predictive equation was confirmed in the cross-validation sample (R2 = .68, p < .001). The unidimensionality of the SIAS was confirmed, and the SIAS total scores proved useful for stroke outcome prediction.
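
    The prediction rule above (discharge FIM regressed on age, SIAS total score, and admission FIM, reported as adjusted R²) can be sketched with ordinary least squares. The data below are synthetic stand-ins; coefficients and ranges are illustrative only:

```python
import numpy as np

def fit_and_adjusted_r2(X, y):
    """Ordinary least squares with intercept; returns coefficients and
    adjusted R^2 (penalizing the number of predictors, as reported in
    the paper)."""
    n, p = X.shape
    A = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    r2 = 1.0 - ss_res / ss_tot
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)
    return beta, adj_r2

# Synthetic stand-in data: discharge FIM driven by age, SIAS total,
# and admission FIM plus noise.
rng = np.random.default_rng(0)
n = 190
age = rng.uniform(40, 85, n)
sias = rng.uniform(20, 76, n)
fim_adm = rng.uniform(18, 120, n)
fim_dis = 30 - 0.2 * age + 0.5 * sias + 0.6 * fim_adm + rng.normal(0, 8, n)

X = np.column_stack([age, sias, fim_adm])
beta, adj_r2 = fit_and_adjusted_r2(X, fim_dis)
```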

  14. Personality, Cognitive Style, Motivation, and Aptitude Predict Systematic Trends in Analytic Forecasting Behavior

    PubMed Central

    Forlines, Clifton L.; Miller, Sarah M.; Regan, John R.; Irvine, John M.

    2014-01-01

    The decision sciences are increasingly challenged to advance methods for modeling analysts, accounting for both analytic strengths and weaknesses, to improve inferences taken from increasingly large and complex sources of data. We examine whether psychometric measures—personality, cognitive style, motivated cognition—predict analytic performance and whether psychometric measures are competitive with aptitude measures (i.e., SAT scores) as analyst sample selection criteria. A heterogeneous, national sample of 927 participants completed an extensive battery of psychometric measures and aptitude tests and was asked 129 geopolitical forecasting questions over the course of 1 year. Factor analysis reveals four dimensions among psychometric measures; dimensions characterized by differently motivated “top-down” cognitive styles predicted distinctive patterns in aptitude and forecasting behavior. These dimensions were not better predictors of forecasting accuracy than aptitude measures. However, multiple regression and mediation analysis reveals that these dimensions influenced forecasting accuracy primarily through bias in forecasting confidence. We also found that these facets were competitive with aptitude tests as forecast sampling criteria designed to mitigate biases in forecasting confidence while maximizing accuracy. These findings inform the understanding of individual difference dimensions at the intersection of analytic aptitude and demonstrate that they wield predictive power in applied, analytic domains. PMID:25983670

  15. Analytic Validation of RNA In Situ Hybridization (RISH) for AR and AR-V7 Expression in Human Prostate Cancer

    PubMed Central

    Guedes, Liana B.; Morais, Carlos L.; Almutairi, Fawaz; Haffner, Michael C.; Zheng, Qizhi; Isaacs, John T.; Antonarakis, Emmanuel S.; Lu, Changxue; Tsai, Harrison; Luo, Jun; De Marzo, Angelo M.; Lotan, Tamara L.

    2016-01-01

    Purpose RNA expression of androgen receptor (AR) splice variants may be a biomarker of resistance to novel androgen deprivation therapies in castration-resistant prostate cancer (CRPC). We analytically validated an RNA in situ hybridization (RISH) assay for total AR and AR-V7 for use in formalin-fixed, paraffin-embedded (FFPE) prostate tumors. Experimental Design We used prostate cell lines and xenografts to validate chromogenic RISH to detect RNA containing AR exon 1 (AR-E1, surrogate for total AR RNA species) and cryptic exon 3 (AR-CE3, surrogate for AR-V7 expression). RISH signals were quantified in FFPE primary tumors and CRPC specimens and compared to known AR and AR-V7 status by immunohistochemistry and RT-PCR. Results The quantified RISH results correlated significantly with total AR and AR-V7 levels by RT-PCR in cell lines, xenografts, and autopsy metastases. Both AR-E1 and AR-CE3 RISH signals were localized in nuclear punctae in addition to the expected cytoplasmic speckles. Compared to admixed benign glands, AR-E1 expression was significantly higher in primary tumor cells, with a median fold increase of 3.0 and 1.4 in two independent cohorts (p<0.0001 and p=0.04, respectively). While AR-CE3 expression was detectable in primary prostatic tumors, levels were substantially higher in a subset of CRPC metastases and cell lines and were correlated with AR-E1 expression. Conclusions RISH for AR-E1 and AR-CE3 is an analytically valid method to examine total AR and AR-V7 RNA levels in FFPE tissues. Future clinical validation studies are required to determine whether AR RISH is a prognostic or predictive biomarker in specific clinical contexts. PMID:27166397

  16. Validation of the SINDA/FLUINT code using several analytical solutions

    NASA Technical Reports Server (NTRS)

    Keller, John R.

    1995-01-01

    The Systems Improved Numerical Differencing Analyzer and Fluid Integrator (SINDA/FLUINT) code has often been used to determine the transient and steady-state response of various thermal and fluid flow networks. While the code is a widely used design and analysis tool, validation of the program has been limited to a few simple studies. For the current study, the SINDA/FLUINT code was compared to four different analytical solutions. The thermal analyzer portion of the code (conduction and radiative heat transfer, the SINDA portion) was first compared to two separate solutions. The first comparison examined a semi-infinite slab with a periodic surface temperature boundary condition. Next, a small, uniform-temperature object (lumped capacitance) was allowed to radiate to a fixed-temperature sink. The fluid portion of the code (FLUINT) was also compared to two different analytical solutions. The first study examined the filling of a tank with an ideal gas, in which there is both control-volume work and heat transfer. The final comparison considered flow in a pipe joining two infinite reservoirs of pressure. The results of all these studies showed that, for the situations examined here, the SINDA/FLUINT code was able to match the results of the analytical solutions.
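
    The first thermal benchmark, a semi-infinite slab with a periodically oscillating surface temperature, has a classical closed-form solution that can be coded directly. This is a sketch of the textbook solution, not NASA's implementation:

```python
import math

def periodic_slab_temperature(x, t, T_mean, A, omega, alpha):
    """Classical analytical solution for a semi-infinite slab whose
    surface temperature oscillates as T_mean + A*cos(omega*t): with
    depth x the oscillation amplitude decays as exp(-k*x) and lags by
    k*x radians, where k = sqrt(omega / (2*alpha)) and alpha is the
    thermal diffusivity."""
    k = math.sqrt(omega / (2.0 * alpha))
    return T_mean + A * math.exp(-k * x) * math.cos(omega * t - k * x)

# Illustrative check: at the surface (x = 0, t = 0) the temperature is
# simply T_mean + A; deep in the slab the oscillation vanishes.
T_surface = periodic_slab_temperature(0.0, 0.0, 300.0, 10.0, 1.0, 1e-5)
```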

  17. Validity and Measurement

    ERIC Educational Resources Information Center

    Maraun, Michael D.

    2012-01-01

    As illuminated forcefully by Professor Newton's provocative analytical and historical excursion, as long as tests are employed to practical ends (prediction, selection, etc.) there is little cause for the metatheoretic angst that occasions rounds of papers on the topic of validity. But then, also, there seems little need, within this context of…

  18. Comparison of analytical and predictive methods for water, protein, fat, sugar, and gross energy in marine mammal milk.

    PubMed

    Oftedal, O T; Eisert, R; Barrell, G K

    2014-01-01

    Mammalian milks may differ greatly in composition from cow milk, and these differences may affect the performance of analytical methods. High-fat, high-protein milks with a preponderance of oligosaccharides, such as those produced by many marine mammals, present a particular challenge. We compared the performance of several methods against reference procedures using Weddell seal (Leptonychotes weddellii) milk of highly varied composition (by reference methods: 27-63% water, 24-62% fat, 8-12% crude protein, 0.5-1.8% sugar). A microdrying step preparatory to carbon-hydrogen-nitrogen (CHN) gas analysis slightly underestimated water content and had a higher repeatability relative standard deviation (RSDr) than did reference oven drying at 100°C. Compared with a reference macro-Kjeldahl protein procedure, the CHN (or Dumas) combustion method had a somewhat higher RSDr (1.56 vs. 0.60%), but the correlation between methods was high (0.992), means were not different (CHN: 17.2±0.46% dry matter basis; Kjeldahl: 17.3±0.49% dry matter basis), there were no significant proportional or constant errors, and predictive performance was high. A carbon stoichiometric procedure based on CHN analysis failed to adequately predict fat (reference: Röse-Gottlieb method) or total sugar (reference: phenol-sulfuric acid method). Gross energy content, calculated from energetic factors and results from reference methods for fat, protein, and total sugar, accurately predicted gross energy as measured by bomb calorimetry. We conclude that the CHN (Dumas) combustion method and calculation of gross energy are acceptable analytical approaches for marine mammal milk, but fat and sugar require separate analysis by appropriate analytic methods and cannot be adequately estimated by carbon stoichiometry. Some other alternative methods: low-temperature drying for water determination; Bradford, Lowry, and biuret methods for protein; the Folch and the Bligh and Dyer methods for fat; and enzymatic and reducing
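
    The gross-energy calculation mentioned above multiplies each proximate component by an energetic factor. A minimal sketch, using commonly quoted energetic factors for milk fat, protein, and sugar (kJ/g); the paper's exact factors are not reproduced here, so the defaults below are assumptions:

```python
def gross_energy_kj_per_g(fat, protein, sugar,
                          f_fat=38.12, f_protein=23.86, f_sugar=16.53):
    """Gross energy (kJ per g of milk) from proximate composition given
    as mass fractions.  The default factors (kJ/g) are commonly used
    values for milk fat, protein, and sugar."""
    return fat * f_fat + protein * f_protein + sugar * f_sugar

# Mid-range Weddell seal milk from the reference-method ranges quoted
# above: ~45 % fat, ~10 % crude protein, ~1 % sugar.
ge = gross_energy_kj_per_g(0.45, 0.10, 0.01)
```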

  19. Analytical relationships for prediction of the mechanical properties of additively manufactured porous biomaterials

    PubMed Central

    Hedayati, Reza

    2016-01-01

    Recent developments in additive manufacturing techniques have motivated an increasing number of researchers to study regular porous biomaterials that are based on repeating unit cells. The physical and mechanical properties of such porous biomaterials have therefore received increasing attention during recent years. One of the areas that has been revived is the analytical study of the mechanical behavior of regular porous biomaterials, with the aim of deriving analytical relationships that can predict the relative density and mechanical properties of porous biomaterials given the design and dimensions of their repeating unit cells. In this article, we review the analytical relationships that have been presented in the literature for predicting the relative density, elastic modulus, Poisson's ratio, yield stress, and buckling limit of regular porous structures based on various types of unit cells. The reviewed analytical relationships are used to compare the mechanical properties of porous biomaterials based on different types of unit cells. The major areas where the analytical relationships have improved during recent years are discussed, and suggestions are made for future research directions. © 2016 Wiley Periodicals, Inc. J Biomed Mater Res Part A: 104A: 3164–3174, 2016. PMID:27502358

  20. Big data analytics : predicting traffic flow regimes from simulated connected vehicle messages using data analytics and machine learning.

    DOT National Transportation Integrated Search

    2016-12-25

    The key objectives of this study were to: 1. Develop advanced analytical techniques that make use of a dynamically configurable connected vehicle message protocol to predict traffic flow regimes in near-real time in a virtual environment and examine ...

  1. Analytic model for ultrasound energy receivers and their optimal electric loads II: Experimental validation

    NASA Astrophysics Data System (ADS)

    Gorostiaga, M.; Wapler, M. C.; Wallrabe, U.

    2017-10-01

    In this paper, we verify the two optimal electric load concepts based on the zero reflection condition and on the power maximization approach for ultrasound energy receivers. We test a high loss 1-3 composite transducer, and find that the measurements agree very well with the predictions of the analytic model for plate transducers that we have developed previously. Additionally, we also confirm that the power maximization and zero reflection loads are very different when the losses in the receiver are high. Finally, we compare the optimal load predictions by the KLM and the analytic models with frequency dependent attenuation to evaluate the influence of the viscosity.
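
    The contrast between the power-maximization and zero-reflection loads can be illustrated with the generic source-load circuit behind the power-maximization condition (conjugate matching). The impedance values below are hypothetical and are not the paper's transducer model:

```python
def delivered_power(V, Zs, Zl):
    """Average power delivered to a load Zl by a sinusoidal source of
    amplitude V with internal (output) impedance Zs: the generic
    circuit picture behind the power-maximization load."""
    I = V / (Zs + Zl)
    return 0.5 * (abs(I) ** 2) * Zl.real

Zs = 50.0 + 20.0j           # hypothetical lossy receiver output impedance
Z_conj = Zs.conjugate()     # power-maximization (conjugate-matched) load

p_match = delivered_power(1.0, Zs, Z_conj)
p_other = delivered_power(1.0, Zs, 50.0 + 0.0j)  # same resistance, no reactance
```

The conjugate-matched load extracts more power than any other load; when receiver losses are high, this load differs markedly from the load that merely cancels reflections, which is the distinction the paper tests.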

  2. Examining the Predictive Validity of NIH Peer Review Scores

    PubMed Central

    Lindner, Mark D.; Nakamura, Richard K.

    2015-01-01

    The predictive validity of peer review at the National Institutes of Health (NIH) has not yet been demonstrated empirically. It might be assumed that the most efficient and expedient test of the predictive validity of NIH peer review would be an examination of the correlation between percentile scores from peer review and bibliometric indices of the publications produced from funded projects. The present study used a large dataset to examine the rationale for such a study, to determine if it would satisfy the requirements for a test of predictive validity. The results show significant restriction of range in the applications selected for funding. Furthermore, those few applications that are funded with slightly worse peer review scores are not selected at random or representative of other applications in the same range. The funding institutes also negotiate with applicants to address issues identified during peer review. Therefore, the peer review scores assigned to the submitted applications, especially for those few funded applications with slightly worse peer review scores, do not reflect the changed and improved projects that are eventually funded. In addition, citation metrics by themselves are not valid or appropriate measures of scientific impact. The use of bibliometric indices on their own to measure scientific impact would likely increase the inefficiencies and problems with replicability already largely attributed to the current over-emphasis on bibliometric indices. Therefore, retrospective analyses of the correlation between percentile scores from peer review and bibliometric indices of the publications resulting from funded grant applications are not valid tests of the predictive validity of peer review at the NIH. PMID:26039440

  3. Individualized prediction of perineural invasion in colorectal cancer: development and validation of a radiomics prediction model.

    PubMed

    Huang, Yanqi; He, Lan; Dong, Di; Yang, Caiyun; Liang, Cuishan; Chen, Xin; Ma, Zelan; Huang, Xiaomei; Yao, Su; Liang, Changhong; Tian, Jie; Liu, Zaiyi

    2018-02-01

    To develop and validate a radiomics prediction model for individualized prediction of perineural invasion (PNI) in colorectal cancer (CRC). After computed tomography (CT) radiomics features extraction, a radiomics signature was constructed in derivation cohort (346 CRC patients). A prediction model was developed to integrate the radiomics signature and clinical candidate predictors [age, sex, tumor location, and carcinoembryonic antigen (CEA) level]. Apparent prediction performance was assessed. After internal validation, independent temporal validation (separate from the cohort used to build the model) was then conducted in 217 CRC patients. The final model was converted to an easy-to-use nomogram. The developed radiomics nomogram that integrated the radiomics signature and CEA level showed good calibration and discrimination performance [Harrell's concordance index (c-index): 0.817; 95% confidence interval (95% CI): 0.811-0.823]. Application of the nomogram in validation cohort gave a comparable calibration and discrimination (c-index: 0.803; 95% CI: 0.794-0.812). Integrating the radiomics signature and CEA level into a radiomics prediction model enables easy and effective risk assessment of PNI in CRC. This stratification of patients according to their PNI status may provide a basis for individualized auxiliary treatment.
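
    Harrell's concordance index used above can be sketched for a binary outcome, where it reduces to the probability that a positive case receives a higher predicted score than a negative case (ties counting one half):

```python
def concordance_index(scores, events):
    """Harrell's c-index for a binary outcome: among all pairs with
    different outcomes, the fraction in which the positive case got the
    higher predicted score; tied scores count as one half."""
    conc = ties = total = 0
    pos = [s for s, e in zip(scores, events) if e == 1]
    neg = [s for s, e in zip(scores, events) if e == 0]
    for sp in pos:
        for sn in neg:
            total += 1
            if sp > sn:
                conc += 1
            elif sp == sn:
                ties += 1
    return (conc + 0.5 * ties) / total

# Toy example: four patients, two with perineural invasion (1).
c = concordance_index([0.9, 0.8, 0.3, 0.2], [1, 0, 1, 0])
```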

  4. Predictive Analytics for Identification of Patients at Risk for QT Interval Prolongation - A Systematic Review.

    PubMed

    Tomaselli Muensterman, Elena; Tisdale, James E

    2018-06-08

    Prolongation of the heart rate-corrected QT (QTc) interval increases the risk for torsades de pointes (TdP), a potentially fatal arrhythmia. The likelihood of TdP is higher in patients with risk factors, which include female sex, older age, heart failure with reduced ejection fraction, hypokalemia, hypomagnesemia, concomitant administration of ≥ 2 QTc interval-prolonging medications, among others. Assessment and quantification of risk factors may facilitate prediction of patients at highest risk for developing QTc interval prolongation and TdP. Investigators have utilized the field of predictive analytics, which generates predictions using techniques including data mining, modeling, machine learning, and others, to develop methods of risk quantification and prediction of QTc interval prolongation. Predictive analytics have also been incorporated into clinical decision support (CDS) tools to alert clinicians regarding patients at increased risk of developing QTc interval prolongation. The objectives of this paper are to assess the effectiveness of predictive analytics for identification of patients at risk of drug-induced QTc interval prolongation, and to discuss the efficacy of incorporation of predictive analytics into CDS tools in clinical practice. A systematic review of English language articles (human subjects only) was performed, yielding 57 articles, with an additional 4 articles identified from other sources; a total of 10 articles were included in this review. Risk scores for QTc interval prolongation have been developed in various patient populations including those in cardiac intensive care units (ICUs) and in broader populations of hospitalized or health system patients. One group developed a risk score that includes information regarding genetic polymorphisms; this score significantly predicted TdP. Development of QTc interval prolongation risk prediction models and incorporation of these models into CDS tools reduces the risk of QTc interval

  5. Analytical Algorithms to Quantify the Uncertainty in Remaining Useful Life Prediction

    NASA Technical Reports Server (NTRS)

    Sankararaman, Shankar; Saxena, Abhinav; Daigle, Matthew; Goebel, Kai

    2013-01-01

    This paper investigates the use of analytical algorithms to quantify the uncertainty in the remaining useful life (RUL) estimate of components used in aerospace applications. The prediction of RUL is affected by several sources of uncertainty, and it is important to systematically quantify their combined effect on the RUL prediction in order to aid risk assessment, risk mitigation, and decision-making. While sampling-based algorithms have conventionally been used for quantifying the uncertainty in RUL, analytical algorithms are computationally cheaper and sometimes better suited for online decision-making. While exact analytical algorithms are available only for certain special cases (e.g., linear models with Gaussian variables), effective approximations can be made using the first-order second-moment method (FOSM), the first-order reliability method (FORM), and the inverse first-order reliability method (inverse FORM). These methods can be used not only to calculate the entire probability distribution of RUL but also to obtain probability bounds on RUL. This paper explains these three methods in detail and illustrates them using the state-space model of a lithium-ion battery.
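
    Of the three approximations named, FOSM is the simplest: propagate the input means and variances through a first-order Taylor expansion of the model. A minimal sketch with a hypothetical RUL function (remaining capacity divided by degradation rate); the paper's battery state-space model is not reproduced here:

```python
import math

def fosm(g, mu, sigma, h=1e-6):
    """First-order second-moment (FOSM) approximation: the mean of g(X)
    is estimated as g(mu), and its variance as the sum of squared
    first-order sensitivities times the input variances (inputs assumed
    independent).  Gradients are taken by central differences."""
    mean = g(mu)
    var = 0.0
    for i, (m, s) in enumerate(zip(mu, sigma)):
        x_hi = list(mu); x_hi[i] = m + h
        x_lo = list(mu); x_lo[i] = m - h
        grad_i = (g(x_hi) - g(x_lo)) / (2.0 * h)
        var += (grad_i * s) ** 2
    return mean, math.sqrt(var)

def rul(x):
    # Hypothetical model: remaining capacity / degradation rate (cycles)
    return x[0] / x[1]

mean, std = fosm(rul, mu=[1.2, 0.01], sigma=[0.05, 0.001])
```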

  6. How Predictive Analytics and Choice Architecture Can Improve Student Success

    ERIC Educational Resources Information Center

    Denley, Tristan

    2014-01-01

    This article explores the challenges that students face in navigating the curricular structure of post-secondary degree programs, and how predictive analytics and choice architecture can play a role. It examines Degree Compass, a course recommendation system that successfully pairs current students with the courses that best fit their talents and…

  7. Validation of a multi-analyte panel with cell-bound complement activation products for systemic lupus erythematosus.

    PubMed

    Dervieux, Thierry; Conklin, John; Ligayon, Jo-Anne; Wolover, Leilani; O'Malley, Tyler; Alexander, Roberta Vezza; Weinstein, Arthur; Ibarra, Claudia A

    2017-07-01

    We describe the analytical validation of an assay panel intended to assist clinicians with the diagnosis of systemic lupus erythematosus (SLE). The multi-analyte panel includes quantitative assessment of complement activation and measurement of autoantibodies. The levels of the complement split product C4d bound to erythrocytes (EC4d) and B-lymphocytes (BC4d), expressed as mean fluorescence intensity (MFI), are measured by quantitative flow cytometry, while autoantibodies (inclusive of antinuclear and anti-double-stranded DNA antibodies) are determined by immunoassays. Results of the multi-analyte panel are reported as positive or negative based on a 2-tiered index score. Post-phlebotomy stability of EC4d and BC4d in EDTA-anticoagulated blood is determined using specimens collected from patients with SLE and normal donors. Three-level C4-coated positive beads are run daily as controls. Analytical validity is reported using intra-day and inter-day coefficients of variation (CV). EC4d and BC4d are stable for 2 days at ambient temperature and for 4 days at 4°C post-phlebotomy. Median intra-day and inter-day CV range from 2.9% to 7.8% (n=30) and 7.3% to 12.4% (n=66), respectively. The 2-tiered index score is reproducible over 4 consecutive days upon storage of blood at 4°C. A total of 2,888 three-level quality control data points were collected from 6 flow cytometers, with an overall failure rate below 3%. Median EC4d level is 6 net MFI (interquartile [IQ] range 4-9 net MFI) and median BC4d is 18 net MFI (IQ range 13-27 net MFI) among 86,852 specimens submitted for testing. The incidence of 2-tiered positive test results is 13.4%. We have established the analytical validity of a multi-analyte assay panel for SLE. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Behavior-Based Budget Management Using Predictive Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Troy Hiltbrand

    Historically, the mechanisms to perform forecasting have primarily used two common factors as a basis for future predictions: time and money. While time and money are very important aspects of determining future budgetary spend patterns, organizations represent a complex system of unique individuals with a myriad of associated behaviors, and all of these behaviors have bearing on how budget is utilized. When looking at forecasted budgets, it becomes a guessing game how budget managers will behave under a given set of conditions. This becomes relatively messy when human nature is introduced, as different managers will react very differently under similar circumstances. While one manager becomes ultra-conservative during periods of financial austerity, another might be unfazed and continue to spend as in the past. Both might revert to a state of budgetary protectionism, masking what is truly happening at the budget-holder level in order to keep as much budget and influence as possible, while at the same time sacrificing the greater good of the organization. To more accurately predict future outcomes, models should consider not only time and money but also other behavioral patterns that have been observed across the organization. The field of predictive analytics is poised to provide the tools and methodologies organizations need to do just this: capture and leverage behaviors of the past to predict the future.

  9. Developing and validating a model to predict the success of an IHCS implementation: the Readiness for Implementation Model.

    PubMed

    Wen, Kuang-Yi; Gustafson, David H; Hawkins, Robert P; Brennan, Patricia F; Dinauer, Susan; Johnson, Pauley R; Siegler, Tracy

    2010-01-01

    To develop and validate the Readiness for Implementation Model (RIM). This model predicts a healthcare organization's potential for success in implementing an interactive health communication system (IHCS). The model consists of seven weighted factors, with each factor containing five to seven elements. Two decision-analytic approaches, self-explicated and conjoint analysis, were used to measure the weights of the RIM with a sample of 410 experts. The RIM model with weights was then validated in a prospective study of 25 IHCS implementation cases. Orthogonal main effects design was used to develop 700 conjoint-analysis profiles, which varied on seven factors. Each of the 410 experts rated the importance and desirability of the factors and their levels, as well as a set of 10 different profiles. For the prospective 25-case validation, three time-repeated measures of the RIM scores were collected for comparison with the implementation outcomes. Two of the seven factors, 'organizational motivation' and 'meeting user needs,' were found to be most important in predicting implementation readiness. No statistically significant difference was found in the predictive validity of the two approaches (self-explicated and conjoint analysis). The RIM was a better predictor for the 1-year implementation outcome than the half-year outcome. The expert sample, the order of the survey tasks, the additive model, and basing the RIM cut-off score on experience are possible limitations of the study. The RIM needs to be empirically evaluated in institutions adopting IHCS and sustaining the system in the long term.

  10. Analytical relationships for prediction of the mechanical properties of additively manufactured porous biomaterials.

    PubMed

    Zadpoor, Amir Abbas; Hedayati, Reza

    2016-12-01

    Recent developments in additive manufacturing techniques have motivated an increasing number of researchers to study regular porous biomaterials that are based on repeating unit cells. The physical and mechanical properties of such porous biomaterials have therefore received increasing attention during recent years. One of the areas that has been revived is the analytical study of the mechanical behavior of regular porous biomaterials, with the aim of deriving analytical relationships that can predict the relative density and mechanical properties of porous biomaterials given the design and dimensions of their repeating unit cells. In this article, we review the analytical relationships that have been presented in the literature for predicting the relative density, elastic modulus, Poisson's ratio, yield stress, and buckling limit of regular porous structures based on various types of unit cells. The reviewed analytical relationships are used to compare the mechanical properties of porous biomaterials based on different types of unit cells. The major areas where the analytical relationships have improved during recent years are discussed, and suggestions are made for future research directions. © 2016 Wiley Periodicals, Inc. J Biomed Mater Res Part A: 104A: 3164-3174, 2016. © 2016 The Authors Journal of Biomedical Materials Research Part A Published by Wiley Periodicals, Inc.

  11. The Predictive Validity of Projective Measures.

    ERIC Educational Resources Information Center

    Suinn, Richard M.; Oskamp, Stuart

    Written for use by clinical practitioners as well as psychological researchers, this book surveys recent literature (1950-1965) on projective test validity by reviewing and critically evaluating studies which shed light on what may reliably be predicted from projective test results. Two major instruments are covered: the Rorschach and the Thematic…

  12. Disentangling the Predictive Validity of High School Grades for Academic Success in University

    ERIC Educational Resources Information Center

    Vulperhorst, Jonne; Lutz, Christel; de Kleijn, Renske; van Tartwijk, Jan

    2018-01-01

    To refine selective admission models, we investigate which measure of prior achievement has the best predictive validity for academic success in university. We compare the predictive validity of three core high school subjects to the predictive validity of high school grade point average (GPA) for academic achievement in a liberal arts university…

  13. VALIDATION OF ANALYTICAL METHODS AND INSTRUMENTATION FOR BERYLLIUM MEASUREMENT: REVIEW AND SUMMARY OF AVAILABLE GUIDES, PROCEDURES, AND PROTOCOLS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ekechukwu, A.

    This document provides a listing of available sources that can be used to validate analytical methods and/or instrumentation for beryllium determination. A literature review was conducted of available standard methods and publications used for method validation and/or quality control. A comprehensive listing of the articles, papers, and books reviewed is given in Appendix 1. Available validation documents and guides are listed in the appendix, each with a brief description of its application and use. In the referenced sources, there are varying approaches to validation and varying descriptions of validation at different stages in method development. This discussion focuses on validation and verification of fully developed methods and instrumentation that have been offered for use or approval by other laboratories or official consensus bodies such as ASTM International, the International Standards Organization (ISO), and the Association of Official Analytical Chemists (AOAC). This review was conducted as part of a collaborative effort to investigate and improve the state of validation for measuring beryllium in the workplace and the environment. Documents and publications from the United States and Europe are included. Unless otherwise specified, all documents were published in English.

  14. Comparative Predictive Validity of the New MCAT Using Different Admissions Criteria.

    ERIC Educational Resources Information Center

    Golmon, Melton E.; Berry, Charles A.

    1981-01-01

    New Medical College Admission Test (MCAT) scores and undergraduate academic achievement were examined for their validity in predicting the performance of two select student populations at Northwestern University Medical School. The data support the hypothesis that New MCAT scores possess substantial predictive validity. (Author/MLW)

  15. Validation of behave fire behavior predictions in oak savannas

    USGS Publications Warehouse

    Grabner, Keith W.; Dwyer, John; Cutter, Bruce E.

    1997-01-01

    Prescribed fire is a valuable tool in the restoration and management of oak savannas. BEHAVE, a fire behavior prediction system developed by the United States Forest Service, can be a useful tool when managing oak savannas with prescribed fire. BEHAVE predictions of fire rate-of-spread and flame length were validated using four standardized fuel models: Fuel Model 1 (short grass), Fuel Model 2 (timber and grass), Fuel Model 3 (tall grass), and Fuel Model 9 (hardwood litter). Also, a customized oak savanna fuel model (COSFM) was created and validated. Results indicate that standardized fuel model 2 and the COSFM reliably estimate mean rate-of-spread (MROS). The COSFM did not appreciably reduce MROS variation when compared to fuel model 2. Fuel models 1, 3, and 9 did not reliably predict MROS. Neither the standardized fuel models nor the COSFM adequately predicted flame lengths. We concluded that standardized fuel model 2 should be used with BEHAVE when predicting fire rates-of-spread in established oak savannas.

  16. Analytical model for local scour prediction around hydrokinetic turbine foundations

    NASA Astrophysics Data System (ADS)

    Musa, M.; Heisel, M.; Hill, C.; Guala, M.

    2017-12-01

Marine and hydrokinetic renewable energy is an emerging sustainable and secure technology which produces clean energy by harnessing water currents, mostly from tidal and fluvial waterways. Hydrokinetic turbines are typically anchored at the bottom of the channel, which can be erodible or non-erodible. Recent experiments demonstrated the interactions between operating turbines and an erodible surface with sediment transport, resulting in a remarkable localized erosion-deposition pattern significantly larger than those observed around static in-river structures such as bridge piers. Predicting local scour geometry at the base of hydrokinetic devices is extremely important for foundation design, installation, operation, and maintenance (IO&M), and for long-term structural integrity. An analytical modeling framework is proposed, applying the phenomenological theory of turbulence to the flow structures that promote the scouring process at the base of a turbine. The evolution of scour is directly linked to device operating conditions through the turbine drag force, which is inferred to locally dictate the energy dissipation rate in the scour region. The predictive model is validated using experimental data obtained at the University of Minnesota's St. Anthony Falls Laboratory (SAFL), covering two sediment mobility regimes (clear water and live bed), different turbine designs, hydraulic parameters, grain size distributions and bedform types. The model is applied to a potential prototype-scale deployment in the lower Mississippi River, demonstrating its practical relevance and endorsing the feasibility of hydrokinetic energy power plants in large sandy rivers. Multi-turbine deployments are further studied experimentally by monitoring both local and non-local geomorphic effects introduced by a twelve-turbine staggered array model installed in a wide channel at SAFL. Local scour behind each turbine is well captured by the theoretical predictive model. However, multi

  17. Teachers' Grade Assignment and the Predictive Validity of Criterion-Referenced Grades

    ERIC Educational Resources Information Center

    Thorsen, Cecilia; Cliffordson, Christina

    2012-01-01

    Research has found that grades are the most valid instruments for predicting educational success. Why grades have better predictive validity than, for example, standardized tests is not yet fully understood. One possible explanation is that grades reflect not only subject-specific knowledge and skills but also individual differences in other…

  18. Early Prediction of Intensive Care Unit-Acquired Weakness: A Multicenter External Validation Study.

    PubMed

    Witteveen, Esther; Wieske, Luuk; Sommers, Juultje; Spijkstra, Jan-Jaap; de Waard, Monique C; Endeman, Henrik; Rijkenberg, Saskia; de Ruijter, Wouter; Sleeswijk, Mengalvio; Verhamme, Camiel; Schultz, Marcus J; van Schaik, Ivo N; Horn, Janneke

    2018-01-01

    An early diagnosis of intensive care unit-acquired weakness (ICU-AW) is often not possible due to impaired consciousness. To avoid a diagnostic delay, we previously developed a prediction model, based on single-center data from 212 patients (development cohort), to predict ICU-AW at 2 days after ICU admission. The objective of this study was to investigate the external validity of the original prediction model in a new, multicenter cohort and, if necessary, to update the model. Newly admitted ICU patients who were mechanically ventilated at 48 hours after ICU admission were included. Predictors were prospectively recorded, and the outcome ICU-AW was defined by an average Medical Research Council score <4. In the validation cohort, consisting of 349 patients, we analyzed performance of the original prediction model by assessment of calibration and discrimination. Additionally, we updated the model in this validation cohort. Finally, we evaluated a new prediction model based on all patients of the development and validation cohort. Of 349 analyzed patients in the validation cohort, 190 (54%) developed ICU-AW. Both model calibration and discrimination of the original model were poor in the validation cohort. The area under the receiver operating characteristics curve (AUC-ROC) was 0.60 (95% confidence interval [CI]: 0.54-0.66). Model updating methods improved calibration but not discrimination. The new prediction model, based on all patients of the development and validation cohort (total of 536 patients) had a fair discrimination, AUC-ROC: 0.70 (95% CI: 0.66-0.75). The previously developed prediction model for ICU-AW showed poor performance in a new independent multicenter validation cohort. Model updating methods improved calibration but not discrimination. The newly derived prediction model showed fair discrimination. This indicates that early prediction of ICU-AW is still challenging and needs further attention.
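The discrimination statistic reported here (AUC-ROC) equals the probability that a randomly chosen patient who developed ICU-AW receives a higher predicted risk than one who did not. A minimal sketch of that computation, using illustrative toy values rather than the study's cohort data:

```python
# Sketch of the AUC-ROC computation via the Mann-Whitney formulation.
# Labels and scores below are illustrative toy values, not study data.

def auc_roc(labels, scores):
    """Probability a positive case outscores a negative case (ties count 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Toy example: 1 = developed ICU-AW, 0 = did not.
print(auc_roc([1, 1, 0, 0], [0.9, 0.4, 0.35, 0.1]))  # perfect separation -> 1.0
```

On this scale, the 0.60 found at external validation is close to chance (0.5), while the updated model's 0.70 indicates only fair discrimination.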

  19. Validity of Integrity Tests for Predicting Drug and Alcohol Abuse

    DTIC Science & Technology

    1993-08-31

Winkler and Sheridan (1989) found that employees who entered employee assistance programs for treating drug addiction were more likely to be absent… This research used psychometric meta-analysis (Hunter & Schmidt, 1990b) to examine the validity of integrity tests for predicting drug and alcohol abuse.

  20. Analytical validation of a new point-of-care assay for serum amyloid A in horses.

    PubMed

    Schwartz, D; Pusterla, N; Jacobsen, S; Christopher, M M

    2018-01-17

    Serum amyloid A (SAA) is a major acute phase protein in horses. A new point-of-care (POC) test for SAA (Stablelab) is available, but studies evaluating its analytical accuracy are lacking. To evaluate the analytical performance of the SAA POC test by 1) determining linearity and precision, 2) comparing results in whole blood with those in serum or plasma, and 3) comparing POC results with those obtained using a previously validated turbidimetric immunoassay (TIA). Assay validation. Analytical validation of the POC test was done in accordance with American Society of Veterinary Clinical Pathology guidelines using residual equine serum/plasma and whole blood samples from the Clinical Pathology Laboratory at the University of California-Davis. A TIA was used as the reference method. We also evaluated the effect of haematocrit (HCT). The POC test was linear for SAA concentrations of up to at least 1000 μg/mL (r = 0.991). Intra-assay CVs were 13, 18 and 15% at high (782 μg/mL), intermediate (116 μg/mL) and low (64 μg/mL) concentrations. Inter-assay (inter-batch) CVs were 45, 14 and 15% at high (1372 μg/mL), intermediate (140 μg/mL) and low (56 μg/mL) concentrations. SAA results in whole blood were significantly lower than those in serum/plasma (P = 0.0002), but were positively correlated (r = 0.908) and not affected by HCT (P = 0.261); proportional negative bias was observed in samples with SAA>500 μg/mL. The difference between methods exceeded the 95% confidence interval of the combined imprecision of both methods (15%). Analytical validation could not be performed in whole blood, the sample most likely to be used stall side. The POC test has acceptable accuracy and precision in equine serum/plasma with SAA concentrations of up to at least 1000 μg/mL. Low inter-batch precision at high concentrations may affect serial measurements, and the use of the same test batch and sample type (serum/plasma or whole blood) is recommended. Comparison of results between the
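The intra- and inter-assay precision figures quoted above are coefficients of variation, CV% = (standard deviation / mean) × 100, computed over replicate measurements. A minimal sketch with hypothetical replicate readings, not the study's data:

```python
import statistics

def cv_percent(replicates):
    """Coefficient of variation of replicate measurements, in percent."""
    return statistics.stdev(replicates) / statistics.mean(replicates) * 100

# Hypothetical replicate SAA readings (ug/mL) at one concentration level:
print(round(cv_percent([780.0, 700.0, 860.0]), 1))  # -> 10.3
```

By this measure, the 45% inter-batch CV at high concentrations reported above implies that repeat measurements of the same high-SAA sample across batches can differ substantially, which is why the authors recommend using the same test batch for serial monitoring.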

  1. Analytical Modeling and Performance Prediction of Remanufactured Gearbox Components

    NASA Astrophysics Data System (ADS)

    Pulikollu, Raja V.; Bolander, Nathan; Vijayakar, Sandeep; Spies, Matthew D.

Gearbox components operate in extreme environments, often leading to premature removal or overhaul. Though worn or damaged, these components can still function if the appropriate remanufacturing processes are deployed. Doing so saves a significant amount of resources (time, materials, energy, manpower) otherwise required to produce a replacement part. Unfortunately, current design and analysis approaches require extensive testing and evaluation to validate the effectiveness and safety of a component that has been used in the field and then processed outside of original OEM specification. Testing every possible combination of component, level of potential damage, and repair processing option would be an expensive and time-consuming feat, prohibiting broad deployment of remanufacturing processes across industry. However, such evaluation and validation can occur through Integrated Computational Materials Engineering (ICME) modeling and simulation. Sentient developed a microstructure-based component life prediction (CLP) tool to quantify and assist the remanufacturing of gearbox components. This was achieved by modeling the design-manufacturing-microstructure-property relationship. The CLP tool assists in remanufacturing of high-value, high-demand rotorcraft, automotive and wind turbine gears and bearings. This paper summarizes the development of the CLP models and the validation effort of comparing simulation results with rotorcraft spiral bevel gear physical test data. CLP analyzes gear components and systems for safety, longevity, reliability and cost by predicting (1) new gearbox component performance and the optimal time-to-remanufacture, (2) qualification of used gearbox components for the remanufacturing process, and (3) remanufactured component performance.

  2. Validation of the enthalpy method by means of analytical solution

    NASA Astrophysics Data System (ADS)

    Kleiner, Thomas; Rückamp, Martin; Bondzio, Johannes; Humbert, Angelika

    2014-05-01

Numerical simulations have moved in recent years from describing the cold-temperate transition surface (CTS) explicitly towards an enthalpy description, which avoids incorporating a singular surface inside the model (Aschwanden et al., 2012). In enthalpy methods the CTS is represented as a level set of the enthalpy state variable. This method has several numerical and practical advantages (e.g. representation of the full energy by one scalar field, no restriction on the topology and shape of the CTS). The method is rather new in glaciology and, to our knowledge, not yet verified and validated against analytical solutions. Unfortunately, analytical solutions for sufficiently complex thermo-mechanically coupled polythermal ice flow are still lacking. However, we present two experiments to test the implementation of the enthalpy equation and the corresponding boundary conditions. The first experiment tests in particular the functionality of the boundary condition scheme and the corresponding basal melt rate calculation. Depending on the thermal situation at the base, the numerical code may have to switch to another boundary type (from Neumann to Dirichlet or vice versa). The main idea of this set-up is to test reversibility during transients: a formerly cold ice body that runs through a warmer period, with an associated build-up of a liquid water layer at the base, must be able to return to its initial steady state. Since we impose several assumptions on the experiment design, analytical solutions can be formulated for different quantities during distinct stages of the simulation. The second experiment tests the positioning of the internal CTS in a parallel-sided polythermal slab. We compare our simulation results to the analytical solution proposed by Greve and Blatter (2009). Results from three different ice flow models (COMIce, ISSM, TIMFD3) are presented.

  3. Analytical Modeling for Mechanical Strength Prediction with Raman Spectroscopy and Fractured Surface Morphology of Novel Coconut Shell Powder Reinforced: Epoxy Composites

    NASA Astrophysics Data System (ADS)

    Singh, Savita; Singh, Alok; Sharma, Sudhir Kumar

    2017-06-01

In this paper, analytical modeling and prediction of the tensile and flexural strength of three-dimensional micro-scaled novel coconut shell powder (CSP) reinforced epoxy polymer composites are reported. The novel CSP has a specific mixing ratio of different coconut shell particle sizes. A comparison is made between the experimentally obtained strength and the modified Guth model. The results show strong evidence that the modified Guth model is not valid for strength prediction in this system. Consequently, a constitutive model equation, named the Singh model, has been developed to predict the tensile and flexural strength of this novel CSP reinforced epoxy composite. Moreover, high-resolution Raman spectra show that the 40% CSP reinforced epoxy composite has a dielectric constant high enough to serve as an alternative capacitor material, whereas fractured-surface morphology reveals strong bonding between the novel CSP and the epoxy polymer, supporting its application as a lightweight engineering composite material.

  4. Functionality of empirical model-based predictive analytics for the early detection of hemodynamic instability.

    PubMed

    Summers, Richard L; Pipke, Matt; Wegerich, Stephan; Conkright, Gary; Isom, Kristen C

    2014-01-01

Background. Monitoring cardiovascular hemodynamics in the modern clinical setting is a major challenge. Increasing amounts of physiologic data must be analyzed and interpreted in the context of the individual patient’s pathology and inherent biologic variability. Certain data-driven analytical methods are currently being explored for smart monitoring of data streams from patients as a first-tier automated detection system for clinical deterioration. As a prelude to human clinical trials, an empirical multivariate machine learning method called Similarity-Based Modeling (“SBM”) was tested in an in silico experiment using data generated with the aid of a detailed computer simulator of human physiology (Quantitative Circulatory Physiology, or “QCP”) which contains complex control systems with realistic integrated feedback loops. Methods. SBM is a kernel-based, multivariate machine learning method that uses monitored clinical information to generate an empirical model of a patient’s physiologic state. This platform allows for the use of predictive analytic techniques to identify early changes in a patient’s condition that are indicative of a state of deterioration or instability. The integrity of the technique was tested through an in silico experiment using QCP in which the output of computer simulations of a slowly evolving cardiac tamponade resulted in a progressive state of cardiovascular decompensation. Simulator outputs for the variables under consideration were generated at a 2-min data rate (0.083 Hz) with the tamponade introduced at a point 420 minutes into the simulation sequence. The functionality of the SBM predictive analytics methodology to identify clinical deterioration was compared to the thresholds used by conventional monitoring methods. Results. The SBM modeling method was found to closely track the normal physiologic variation as simulated by QCP. With the slow development of the tamponade, the SBM model is seen to disagree while the

  5. Predicting child maltreatment: A meta-analysis of the predictive validity of risk assessment instruments.

    PubMed

    van der Put, Claudia E; Assink, Mark; Boekhout van Solinge, Noëlle F

    2017-11-01

    Risk assessment is crucial in preventing child maltreatment since it can identify high-risk cases in need of child protection intervention. Despite widespread use of risk assessment instruments in child welfare, it is unknown how well these instruments predict maltreatment and what instrument characteristics are associated with higher levels of predictive validity. Therefore, a multilevel meta-analysis was conducted to examine the predictive accuracy of (characteristics of) risk assessment instruments. A literature search yielded 30 independent studies (N=87,329) examining the predictive validity of 27 different risk assessment instruments. From these studies, 67 effect sizes could be extracted. Overall, a medium significant effect was found (AUC=0.681), indicating a moderate predictive accuracy. Moderator analyses revealed that onset of maltreatment can be better predicted than recurrence of maltreatment, which is a promising finding for early detection and prevention of child maltreatment. In addition, actuarial instruments were found to outperform clinical instruments. To bring risk and needs assessment in child welfare to a higher level, actuarial instruments should be further developed and strengthened by distinguishing risk assessment from needs assessment and by integrating risk assessment with case management. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Bridging the Gap between Human Judgment and Automated Reasoning in Predictive Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanfilippo, Antonio P.; Riensche, Roderick M.; Unwin, Stephen D.

    2010-06-07

Events occur daily that impact the health, security and sustainable growth of our society. If we are to address the challenges that emerge from these events, anticipatory reasoning has to become an everyday activity. Strong advances have been made in using integrated modeling for analysis and decision making. However, a wider impact of predictive analytics is currently hindered by the lack of systematic methods for integrating predictive inferences from computer models with human judgment. In this paper, we present a predictive analytics approach that supports anticipatory analysis and decision-making through a concerted reasoning effort that interleaves human judgment and automated inferences. We describe a systematic methodology for integrating modeling algorithms within a serious gaming environment in which role-playing by human agents provides updates to model nodes and the ensuing model outcomes in turn influence the behavior of the human players. The approach ensures a strong functional partnership between human players and computer models while maintaining a high degree of independence and greatly facilitating the connection between model and game structures.

  7. Fluid dynamics of coarctation of the aorta: analytical solution, in vitro validation and in vivo evaluation

    NASA Astrophysics Data System (ADS)

    Keshavarz-Motamed, Zahra

    2015-11-01

Coarctation of the aorta (COA) is a congenital heart disease corresponding to a narrowing of the aorta. Cardiac catheterization is considered the reference standard for definitive evaluation of COA severity, based on the peak-to-peak trans-coarctation pressure gradient (PtoP TCPG) and the instantaneous systolic value of the trans-COA pressure gradient (TCPG). However, invasive cardiac catheterization may carry high risks, given that patients with COA commonly undergo multiple follow-up catheterizations. The objective of this study is to present an analytical description of the COA that estimates PtoP TCPG and TCPG without the need for high-risk invasive data collection. Coupled Navier-Stokes and elastic deformation equations were solved analytically to estimate TCPG and PtoP TCPG. The results were validated against data measured in vitro (e.g., 90% COA: TCPG: root mean squared error (RMSE) = 3.93 mmHg; PtoP TCPG: RMSE = 7.9 mmHg). Moreover, the PtoP TCPG estimated by the suggested analytical description was validated using clinical data in twenty patients with COA (maximum RMSE: 8.3 mmHg). Very good correlation and concordance were found between the TCPG and PtoP TCPG obtained from the analytical formulation and the in vitro and in vivo data. The suggested methodology can be considered an alternative to cardiac catheterization and can help prevent its risks.
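The validation metric reported here (RMSE in mmHg) summarizes the typical deviation between the analytically estimated and the catheter-measured pressure gradients. A minimal sketch of the calculation with made-up gradient values, not the study's measurements:

```python
import math

def rmse(estimated, measured):
    """Root mean squared error between paired estimates and measurements."""
    n = len(estimated)
    return math.sqrt(sum((e - m) ** 2 for e, m in zip(estimated, measured)) / n)

# Toy trans-coarctation gradients in mmHg (hypothetical values):
print(round(rmse([30.0, 42.0, 55.0], [28.0, 45.0, 54.0]), 2))  # -> 2.16
```

Because squared errors are averaged before the square root, RMSE weights large individual discrepancies more heavily than mean absolute error, which suits a safety-relevant quantity like a pressure gradient.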

  8. External validation of the Cairns Prediction Model (CPM) to predict conversion from laparoscopic to open cholecystectomy.

    PubMed

    Hu, Alan Shiun Yew; O'Donohue, Peter; Gunnarsson, Ronny K; de Costa, Alan

    2018-03-14

Valid and user-friendly prediction models for conversion to open cholecystectomy allow for proper planning prior to surgery. The Cairns Prediction Model (CPM) has been in clinical use at the original study site for the past three years, but has not been tested at other sites. A retrospective, single-centred study collected ultrasonic measurements and clinical variables, along with conversion status, from consecutive patients who underwent laparoscopic cholecystectomy from 2013 to 2016 in The Townsville Hospital, North Queensland, Australia. An area under the curve (AUC) was calculated to externally validate the CPM. Conversion was necessary in 43 (4.2%) of 1035 patients. External validation showed an area under the curve of 0.87 (95% CI 0.82-0.93, p = 1.1 × 10⁻¹⁴). In comparison with most previously published models, which have an AUC of approximately 0.80 or less, the CPM has the highest AUC of all published prediction models for both internal and external validation. Crown Copyright © 2018. Published by Elsevier Inc. All rights reserved.

  9. Hybrid experimental/analytical models of structural dynamics - Creation and use for predictions

    NASA Technical Reports Server (NTRS)

    Balmes, Etienne

    1993-01-01

    An original complete methodology for the construction of predictive models of damped structural vibrations is introduced. A consistent definition of normal and complex modes is given which leads to an original method to accurately identify non-proportionally damped normal mode models. A new method to create predictive hybrid experimental/analytical models of damped structures is introduced, and the ability of hybrid models to predict the response to system configuration changes is discussed. Finally a critical review of the overall methodology is made by application to the case of the MIT/SERC interferometer testbed.

  10. Analytical validation of a next generation sequencing liquid biopsy assay for high sensitivity broad molecular profiling.

    PubMed

    Plagnol, Vincent; Woodhouse, Samuel; Howarth, Karen; Lensing, Stefanie; Smith, Matt; Epstein, Michael; Madi, Mikidache; Smalley, Sarah; Leroy, Catherine; Hinton, Jonathan; de Kievit, Frank; Musgrave-Brown, Esther; Herd, Colin; Baker-Neblett, Katherine; Brennan, Will; Dimitrov, Peter; Campbell, Nathan; Morris, Clive; Rosenfeld, Nitzan; Clark, James; Gale, Davina; Platt, Jamie; Calaway, John; Jones, Greg; Forshew, Tim

    2018-01-01

    Circulating tumor DNA (ctDNA) analysis is being incorporated into cancer care; notably in profiling patients to guide treatment decisions. Responses to targeted therapies have been observed in patients with actionable mutations detected in plasma DNA at variant allele fractions (VAFs) below 0.5%. Highly sensitive methods are therefore required for optimal clinical use. To enable objective assessment of assay performance, detailed analytical validation is required. We developed the InVisionFirst™ assay, an assay based on enhanced tagged amplicon sequencing (eTAm-Seq™) technology to profile 36 genes commonly mutated in non-small cell lung cancer (NSCLC) and other cancer types for actionable genomic alterations in cell-free DNA. The assay has been developed to detect point mutations, indels, amplifications and gene fusions that commonly occur in NSCLC. For analytical validation, two 10mL blood tubes were collected from NSCLC patients and healthy volunteer donors. In addition, contrived samples were used to represent a wide spectrum of genetic aberrations and VAFs. Samples were analyzed by multiple operators, at different times and using different reagent Lots. Results were compared with digital PCR (dPCR). The InVisionFirst assay demonstrated an excellent limit of detection, with 99.48% sensitivity for SNVs present at VAF range 0.25%-0.33%, 92.46% sensitivity for indels at 0.25% VAF and a high rate of detection at lower frequencies while retaining high specificity (99.9997% per base). The assay also detected ALK and ROS1 gene fusions, and DNA amplifications in ERBB2, FGFR1, MET and EGFR with high sensitivity and specificity. Comparison between the InVisionFirst assay and dPCR in a series of cancer patients showed high concordance. This analytical validation demonstrated that the InVisionFirst assay is highly sensitive, specific and robust, and meets analytical requirements for clinical applications.

  11. Analytical validation of a next generation sequencing liquid biopsy assay for high sensitivity broad molecular profiling

    PubMed Central

    Howarth, Karen; Lensing, Stefanie; Smith, Matt; Epstein, Michael; Madi, Mikidache; Smalley, Sarah; Leroy, Catherine; Hinton, Jonathan; de Kievit, Frank; Musgrave-Brown, Esther; Herd, Colin; Baker-Neblett, Katherine; Brennan, Will; Dimitrov, Peter; Campbell, Nathan; Morris, Clive; Rosenfeld, Nitzan; Clark, James; Gale, Davina; Platt, Jamie; Calaway, John; Jones, Greg

    2018-01-01

    Circulating tumor DNA (ctDNA) analysis is being incorporated into cancer care; notably in profiling patients to guide treatment decisions. Responses to targeted therapies have been observed in patients with actionable mutations detected in plasma DNA at variant allele fractions (VAFs) below 0.5%. Highly sensitive methods are therefore required for optimal clinical use. To enable objective assessment of assay performance, detailed analytical validation is required. We developed the InVisionFirst™ assay, an assay based on enhanced tagged amplicon sequencing (eTAm-Seq™) technology to profile 36 genes commonly mutated in non-small cell lung cancer (NSCLC) and other cancer types for actionable genomic alterations in cell-free DNA. The assay has been developed to detect point mutations, indels, amplifications and gene fusions that commonly occur in NSCLC. For analytical validation, two 10mL blood tubes were collected from NSCLC patients and healthy volunteer donors. In addition, contrived samples were used to represent a wide spectrum of genetic aberrations and VAFs. Samples were analyzed by multiple operators, at different times and using different reagent Lots. Results were compared with digital PCR (dPCR). The InVisionFirst assay demonstrated an excellent limit of detection, with 99.48% sensitivity for SNVs present at VAF range 0.25%-0.33%, 92.46% sensitivity for indels at 0.25% VAF and a high rate of detection at lower frequencies while retaining high specificity (99.9997% per base). The assay also detected ALK and ROS1 gene fusions, and DNA amplifications in ERBB2, FGFR1, MET and EGFR with high sensitivity and specificity. Comparison between the InVisionFirst assay and dPCR in a series of cancer patients showed high concordance. This analytical validation demonstrated that the InVisionFirst assay is highly sensitive, specific and robust, and meets analytical requirements for clinical applications. PMID:29543828

  12. Empirical testing of an analytical model predicting electrical isolation of photovoltaic modules

    NASA Astrophysics Data System (ADS)

    Garcia, A., III; Minning, C. P.; Cuddihy, E. F.

    A major design requirement for photovoltaic modules is that the encapsulation system be capable of withstanding large DC potentials without electrical breakdown. Presented is a simple analytical model which can be used to estimate material thickness to meet this requirement for a candidate encapsulation system or to predict the breakdown voltage of an existing module design. A series of electrical tests to verify the model are described in detail. The results of these verification tests confirmed the utility of the analytical model for preliminary design of photovoltaic modules.

  13. VALIDATION OF ANALYTICAL METHODS AND INSTRUMENTATION FOR BERYLLIUM MEASUREMENT: REVIEW AND SUMMARY OF AVAILABLE GUIDES, PROCEDURES, AND PROTOCOLS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ekechukwu, A

Method validation is the process of evaluating whether an analytical method is acceptable for its intended purpose. For pharmaceutical methods, guidelines from the United States Pharmacopeia (USP), International Conference on Harmonisation (ICH), and the United States Food and Drug Administration (USFDA) provide a framework for performing such validations. In general, methods for regulatory compliance must include studies on specificity, linearity, accuracy, precision, range, detection limit, quantitation limit, and robustness. Elements of these guidelines are readily adapted to the issue of validation for beryllium sampling and analysis. This document provides a listing of available sources which can be used to validate analytical methods and/or instrumentation for beryllium determination. A literature review was conducted of available standard methods and publications used for method validation and/or quality control. A comprehensive listing of the articles, papers and books reviewed is given in the Appendix. Available validation documents and guides are listed therein; each has a brief description of application and use. In the referenced sources, there are varying approaches to validation and varying descriptions of the validation process at different stages in method development. This discussion focuses on validation and verification of fully developed methods and instrumentation that have been offered up for use or approval by other laboratories or official consensus bodies such as ASTM International, the International Standards Organization (ISO) and the Association of Official Analytical Chemists (AOAC). This review was conducted as part of a collaborative effort to investigate and improve the state of validation for measuring beryllium in the workplace and the environment. Documents and publications from the United States and Europe are included. Unless otherwise specified, all referenced documents were published in English.

  14. Predictive Validation of an Influenza Spread Model

    PubMed Central

    Hyder, Ayaz; Buckeridge, David L.; Leung, Brian

    2013-01-01

    Background Modeling plays a critical role in mitigating the impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. Methods and Findings We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998–1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks in advance with reasonable reliability; accuracy depended on the method of forecasting (static or dynamic). Conclusions Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve their predictive
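The deviation/error between an expected epidemic curve and an observed one can be summarized with simple metrics such as the RMSE of weekly intensity and the shift in peak week; a minimal sketch (the weekly counts below are invented, not surveillance data from the study):

```python
import math

def epidemic_errors(predicted, observed):
    """Compare a predicted weekly epidemic curve against an observed one.

    Returns the RMSE of weekly intensities and the difference (in weeks)
    between the predicted and observed epidemic peak weeks.
    """
    assert len(predicted) == len(observed)
    rmse = math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(observed))
    peak_shift = predicted.index(max(predicted)) - observed.index(max(observed))
    return rmse, peak_shift

# Illustrative weekly laboratory-confirmed counts (not real data).
pred = [2, 5, 12, 30, 55, 40, 18, 7, 3]
obs = [1, 4, 10, 25, 60, 45, 20, 8, 2]
rmse, shift = epidemic_errors(pred, obs)
print(rmse, shift)  # a peak_shift of 0 means the peak week was forecast exactly
```

A dynamic forecast would recompute these errors each week as new observations arrive, rather than once from a fixed fit.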

  15. External validation of preexisting first trimester preeclampsia prediction models.

    PubMed

    Allen, Rebecca E; Zamora, Javier; Arroyo-Manzano, David; Velauthar, Luxmilar; Allotey, John; Thangaratinam, Shakila; Aquilina, Joseph

    2017-10-01

    To validate the increasing number of prognostic models being developed for preeclampsia using our own prospective study. A systematic review of the literature that assessed biomarkers, uterine artery Doppler and maternal characteristics in the first trimester for the prediction of preeclampsia was performed, and models were selected based on predefined criteria. Validation was performed by applying the regression coefficients published in the different derivation studies to our cohort. We assessed the models' discrimination ability and calibration. Twenty models were identified for validation. The discrimination ability observed in the derivation studies (area under the curve, AUC) ranged from 0.70 to 0.96. When these models were validated against the validation cohort, the AUCs varied considerably, ranging from 0.504 to 0.833. Comparing the AUCs obtained in the derivation studies to those in the validation cohort, we found statistically significant differences in several studies. There is currently no definitive prediction model with adequate ability to discriminate for preeclampsia which performs as well when applied to a different population and can differentiate well between the highest and lowest risk groups within the tested population. The pre-existing large number of models limits the value of further model development, and future research should be focused on further attempts to validate existing models and on assessing whether their implementation improves patient care. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
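The external-validation procedure described here (apply published regression coefficients to a new cohort, then measure discrimination) can be sketched in a few lines. All numbers below, including the logistic coefficients and the four-patient cohort, are hypothetical placeholders, not values from the study:

```python
import math

def predicted_risk(x, intercept, coefs):
    """Apply published logistic regression coefficients to a new patient's covariates."""
    lp = intercept + sum(b * xi for b, xi in zip(coefs, x))
    return 1.0 / (1.0 + math.exp(-lp))  # logistic link

def auc(scores, labels):
    """Concordance (AUC): probability a case outranks a non-case; ties count 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical derivation-study coefficients and a toy validation cohort:
# covariates might be e.g. maternal age and uterine artery Doppler PI.
intercept, coefs = -4.0, [0.03, 1.2]
cohort = [([30, 1.4], 1), ([24, 0.9], 0), ([38, 1.6], 1), ([29, 1.0], 0)]
risks = [predicted_risk(x, intercept, coefs) for x, _ in cohort]
print(auc(risks, [y for _, y in cohort]))
```

Calibration would additionally compare the mean predicted risk against the observed event rate, which discrimination alone does not capture.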

  16. An analytical method for designing low noise helicopter transmissions

    NASA Technical Reports Server (NTRS)

    Bossler, R. B., Jr.; Bowes, M. A.; Royal, A. C.

    1978-01-01

    The development and experimental validation of a method for analytically modeling the noise mechanisms in helicopter geared power transmission systems is described. This method can be used within the design process to predict interior noise levels and to investigate the noise-reducing potential of alternative transmission design details. Examples are discussed.

  17. Predicting functional outcomes among college drinkers: reliability and predictive validity of the Young Adult Alcohol Consequences Questionnaire.

    PubMed

    Read, Jennifer P; Merrill, Jennifer E; Kahler, Christopher W; Strong, David R

    2007-11-01

    Heavy drinking and associated consequences are widespread among U.S. college students. Recently, Read et al. (Read, J. P., Kahler, C. W., Strong, D., & Colder, C. R. (2006). Development and preliminary validation of the Young Adult Alcohol Consequences Questionnaire. Journal of Studies on Alcohol, 67, 169-178) developed the Young Adult Alcohol Consequences Questionnaire (YAACQ) to assess the broad range of consequences that may result from heavy drinking in the college milieu. In the present study, we sought to add to the psychometric validation of this measure by employing a prospective design to examine the test-retest reliability, concurrent validity, and predictive validity of the YAACQ. We also sought to examine the utility of the YAACQ administered early in the semester in the prediction of functional outcomes later in the semester, including the persistence of heavy drinking, and academic functioning. Ninety-two college students (48 females) completed a self-report assessment battery during the first weeks of the Fall semester, and approximately one week later. Additionally, 64 subjects (37 females) participated at an optional third time point at the end of the semester. Overall, the YAACQ demonstrated strong internal consistency, test-retest reliability, and concurrent and predictive validity. YAACQ scores also were predictive of both drinking frequency, and "binge" drinking frequency. YAACQ total scores at baseline were an early indicator of academic performance later in the semester, with greater number of total consequences experienced being negatively associated with end-of-semester grade point average. Specific YAACQ subscale scores (Impaired Control, Dependence Symptoms, Blackout Drinking) showed unique prediction of persistent drinking and academic outcomes.

  18. Validation of urban freeway models. [supporting datasets]

    DOT National Transportation Integrated Search

    2015-01-01

    The goal of the SHRP 2 Project L33 Validation of Urban Freeway Models was to assess and enhance the predictive travel time reliability models developed in the SHRP 2 Project L03, Analytic Procedures for Determining the Impacts of Reliability Mitigati...

  19. An analytical model of leakage neutron equivalent dose for passively-scattered proton radiotherapy and validation with measurements.

    PubMed

    Schneider, Christopher; Newhauser, Wayne; Farah, Jad

    2015-05-18

    Exposure to stray neutrons increases the risk of second cancer development after proton therapy. Previously reported analytical models of this exposure were difficult to configure and had not been investigated below 100 MeV proton energy. The purposes of this study were to test an analytical model of neutron equivalent dose per therapeutic absorbed dose (H/D) at 75 MeV and to improve the model by reducing the number of configuration parameters and making it continuous in proton energy from 100 to 250 MeV. To develop the analytical model, we used previously published H/D values in water from Monte Carlo simulations of a general-purpose beamline for proton energies from 100 to 250 MeV. We also configured and tested the model on in-air neutron equivalent doses measured for a 75 MeV ocular beamline. Predicted H/D values from the analytical model and Monte Carlo agreed well from 100 to 250 MeV (10% average difference). Predicted H/D values from the analytical model also agreed well with measurements at 75 MeV (15% average difference). The results indicate that analytical models can give fast, reliable calculations of neutron exposure after proton therapy. This ability is absent in treatment planning systems but vital to second cancer risk estimation.

  20. Analytic validation and real-time clinical application of an amplicon-based targeted gene panel for advanced cancer

    PubMed Central

    Wing, Michele R.; Reeser, Julie W.; Smith, Amy M.; Reeder, Matthew; Martin, Dorrelyn; Jewell, Benjamin M.; Datta, Jharna; Miya, Jharna; Monk, J. Paul; Mortazavi, Amir; Otterson, Gregory A.; Goldberg, Richard M.; VanDeusen, Jeffrey B.; Cole, Sharon; Dittmar, Kristin; Jaiswal, Sunny; Kinzie, Matthew; Waikhom, Suraj; Freud, Aharon G.; Zhou, Xiao-Ping; Chen, Wei; Bhatt, Darshna; Roychowdhury, Sameek

    2017-01-01

    Multiplex somatic testing has emerged as a strategy to test patients with advanced cancer. We demonstrate our analytic validation approach for a gene hotspot panel and real-time prospective clinical application for any cancer type. The TruSight Tumor 26 assay amplifies 85 somatic hotspot regions across 26 genes. Using cell line and tumor mixes, we observed that 100% of the 14,715 targeted bases had at least 1000x raw coverage. We determined the sensitivity (100%, 95% CI: 96-100%), positive predictive value (100%, 95% CI: 96-100%), reproducibility (100% concordance), and limit of detection (3% variant allele frequency at 1000x read depth) of this assay to detect single nucleotide variants and small insertions and deletions. Next, we applied the assay prospectively in a clinical tumor sequencing study to evaluate 174 patients with metastatic or advanced cancer, including frozen tumors, formalin-fixed tumors, and enriched peripheral blood mononuclear cells in hematologic cancers. We reported one or more somatic mutations in 89 (53%) of the sequenced tumors (167 passing quality filters). Forty-three of these patients (26%) had mutations that would enable eligibility for targeted therapies. This study demonstrates the validity and feasibility of applying TruSight Tumor 26 for pan-cancer testing using multiple specimen types. PMID:29100271
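Confidence intervals like the "100% (95% CI: 96-100%)" figures above come from standard binomial interval methods; a minimal sketch using the Wilson score interval (one common choice; the counts are illustrative, and the abstract does not state which interval method the authors used):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (e.g. assay sensitivity)."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return max(0.0, centre - half), min(1.0, centre + half)

# e.g. 90 of 90 known variants detected (counts are made up, not the study's).
lo, hi = wilson_ci(90, 90)
print(f"sensitivity 100%, 95% CI {lo:.2f}-{hi:.2f}")
```

Note that with an observed proportion of 1.0, only the lower bound is informative; roughly 90 true positives are needed before it rises above 96%.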

  1. Testing the Predictive Validity of the Hendrich II Fall Risk Model.

    PubMed

    Jung, Hyesil; Park, Hyeoun-Ae

    2018-03-01

    Cumulative data on patient fall risk have been compiled in electronic medical records systems, and it is possible to test the validity of fall-risk assessment tools using these data between the times of admission and occurrence of a fall. The Hendrich II Fall Risk Model scores assessed at three time points during hospital stays were extracted and used for testing the predictive validity: (a) the score upon admission, (b) the maximum fall-risk score between admission and falling or discharge, and (c) the score immediately before falling or discharge. Predictive validity was examined using seven predictive indicators. In addition, logistic regression analysis was used to identify factors that significantly affect the occurrence of a fall. Among the different time points, the maximum fall-risk score assessed between admission and falling or discharge showed the best predictive performance. Confusion or disorientation and a poor ability to rise from a sitting position were significant risk factors for a fall.

  2. Transformational and transactional leadership: a meta-analytic test of their relative validity.

    PubMed

    Judge, Timothy A; Piccolo, Ronald F

    2004-10-01

    This study provided a comprehensive examination of the full range of transformational, transactional, and laissez-faire leadership. Results (based on 626 correlations from 87 sources) revealed an overall validity of .44 for transformational leadership, and this validity generalized over longitudinal and multisource designs. Contingent reward (.39) and laissez-faire (-.37) leadership had the next highest overall relations; management by exception (active and passive) was inconsistently related to the criteria. Surprisingly, there were several criteria for which contingent reward leadership had stronger relations than did transformational leadership. Furthermore, transformational leadership was strongly correlated with contingent reward (.80) and laissez-faire (-.65) leadership. Transformational and contingent reward leadership generally predicted criteria controlling for the other leadership dimensions, although transformational leadership failed to predict leader job performance. (c) 2004 APA, all rights reserved
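An overall validity estimate of this kind is, at its simplest, a sample-size-weighted mean of per-study correlations (the bare-bones first step of a Hunter-Schmidt style meta-analysis, before any artifact corrections); the inputs below are toy values, not the 626 correlations analyzed here:

```python
def weighted_mean_r(correlations, sample_sizes):
    """Sample-size-weighted mean correlation (bare-bones meta-analytic estimate)."""
    total_n = sum(sample_sizes)
    return sum(r * n for r, n in zip(correlations, sample_sizes)) / total_n

# Toy inputs: three hypothetical studies' validities with their sample sizes.
rs = [0.50, 0.40, 0.42]
ns = [100, 300, 100]
print(round(weighted_mean_r(rs, ns), 3))
```

A full meta-analysis would also correct for unreliability and range restriction and estimate the variance left after sampling error, which this sketch omits.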

  3. Validation of Accelerometer Prediction Equations in Children with Chronic Disease.

    PubMed

    Stephens, Samantha; Takken, Tim; Esliger, Dale W; Pullenayegum, Eleanor; Beyene, Joseph; Tremblay, Mark; Schneiderman, Jane; Biggar, Doug; Longmuir, Pat; McCrindle, Brian; Abad, Audrey; Ignas, Dan; Van Der Net, Janjaap; Feldman, Brian

    2016-02-01

    The purpose of this study was to assess the criterion validity of existing accelerometer-based energy expenditure (EE) prediction equations among children with chronic conditions, and to develop new prediction equations. Children with congenital heart disease (CHD), cystic fibrosis (CF), dermatomyositis (JDM), juvenile arthritis (JA), inherited muscle disease (IMD), and hemophilia (HE) completed 7 tasks while EE was measured using indirect calorimetry, with activity counts determined by accelerometer. Agreement between predicted EE and measured EE was assessed. Disease-specific equations and cut points were developed and cross-validated. In total, 196 subjects participated. One participant dropped out before testing due to time constraints, while 15 CHD, 32 CF, 31 JDM, 31 JA, 30 IMD, 28 HE, and 29 healthy controls completed the study. Agreement between predicted and measured EE varied across disease groups, with ICCs ranging from .13 to .46. Disease-specific prediction equations showed stronger agreement (ICC = .62-.88; SE = 0.45-0.78). In conclusion, poor agreement was demonstrated using current prediction equations in children with chronic conditions. Disease-specific equations and cut points were developed.

  4. Predicting Blunt Cerebrovascular Injury in Pediatric Trauma: Validation of the “Utah Score”

    PubMed Central

    Ravindra, Vijay M.; Bollo, Robert J.; Sivakumar, Walavan; Akbari, Hassan; Naftel, Robert P.; Limbrick, David D.; Jea, Andrew; Gannon, Stephen; Shannon, Chevis; Birkas, Yekaterina; Yang, George L.; Prather, Colin T.; Kestle, John R.

    2017-01-01

    Risk factors for blunt cerebrovascular injury (BCVI) may differ between children and adults, suggesting that children at low risk for BCVI after trauma receive unnecessary computed tomography angiography (CTA) and high-dose radiation. We previously developed a score for predicting pediatric BCVI based on retrospective cohort analysis. Our objective is to externally validate this prediction score with a retrospective multi-institutional cohort. We included patients who underwent CTA for traumatic cranial injury at four pediatric Level I trauma centers. Each patient in the validation cohort was scored using the “Utah Score” and classified as high or low risk. Before analysis, we defined a misclassification rate <25% as validating the Utah Score. Six hundred forty-five patients (mean age 8.6 ± 5.4 years; 63.4% males) underwent screening for BCVI via CTA. The validation cohort was 411 patients from three sites compared with the training cohort of 234 patients. Twenty-two BCVIs (5.4%) were identified in the validation cohort. The Utah Score was significantly associated with BCVIs in the validation cohort (odds ratio 8.1 [3.3, 19.8], p < 0.001) and discriminated well in the validation cohort (area under the curve 72%). When the Utah Score was applied to the validation cohort, the sensitivity was 59%, specificity was 85%, positive predictive value was 18%, and negative predictive value was 97%. The Utah Score misclassified 16.6% of patients in the validation cohort. The Utah Score for predicting BCVI in pediatric trauma patients was validated with a low misclassification rate using a large, independent, multicenter cohort. Its implementation in the clinical setting may reduce the use of CTA in low-risk patients. PMID:27297774
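The reported sensitivity, specificity, predictive values, and misclassification rate all derive from a single 2x2 confusion matrix; a minimal sketch (the counts below are reconstructed only to roughly match the reported percentages and are not the study's actual table):

```python
def screening_metrics(tp, fp, fn, tn):
    """Derive the standard screening metrics from a 2x2 confusion matrix."""
    total = tp + fp + fn + tn
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "misclassification": (fp + fn) / total,
    }

# Illustrative counts for a cohort of 411 with 22 events (not the published table).
metrics = screening_metrics(tp=13, fp=58, fn=9, tn=331)
for name, value in metrics.items():
    print(f"{name}: {value:.1%}")
```

With a 5.4% event prevalence, a modest PPV alongside a high NPV is expected: most positives are false alarms, but a negative score is highly reassuring.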

  5. The Predictive Validity of the University Student Selection Examination

    ERIC Educational Resources Information Center

    Karakaya, Ismail; Tavsancil, Ezel

    2008-01-01

    The main purpose of this study is to investigate the predictive validity of the 2003 University Student Selection Examination (OSS). For this purpose, freshman grade point average (FGPA) in higher education was predicted by raw scores, standard scores, and placement scores (YEP). This study has been conducted on a research group. In this study,…

  6. Standardization, evaluation and early-phase method validation of an analytical scheme for batch-consistency N-glycosylation analysis of recombinant produced glycoproteins.

    PubMed

    Zietze, Stefan; Müller, Rainer H; Brecht, René

    2008-03-01

    In order to set up a batch-to-batch consistency analytical scheme for N-glycosylation analysis, several sample preparation steps, including enzyme digestions and fluorophore labelling, and two HPLC methods were established. The whole method scheme was standardized, evaluated and validated according to the requirements for analytical testing in early clinical drug development, using a recombinantly produced reference glycoprotein (RGP). The standardization of the methods was ensured by clearly defined standard operating procedures. During evaluation of the methods, the major interest was determining the loss of oligosaccharides within the analytical scheme. Validation of the methods was performed with respect to specificity, linearity, repeatability, LOD and LOQ. Because reference N-glycan standards were not available, a statistical approach was chosen to derive accuracy from the linearity data. After finishing the validation procedure, defined limits for method variability could be calculated, and differences observed in consistency analysis could be separated into significant and incidental ones.

  7. Experimental validation of predicted cancer genes using FRET

    NASA Astrophysics Data System (ADS)

    Guala, Dimitri; Bernhem, Kristoffer; Ait Blal, Hammou; Jans, Daniel; Lundberg, Emma; Brismar, Hjalmar; Sonnhammer, Erik L. L.

    2018-07-01

    Huge amounts of data are generated in genome-wide experiments designed to investigate diseases with complex genetic causes. Follow-up of all potential leads produced by such experiments is currently cost-prohibitive and time-consuming. Gene prioritization tools alleviate these constraints by directing further experimental efforts towards the most promising candidate targets. Recently, a gene prioritization tool called MaxLink was shown to outperform other widely used state-of-the-art prioritization tools in a large-scale in silico benchmark. An experimental validation of predictions made by MaxLink has, however, been lacking. In this study we used Fluorescence Resonance Energy Transfer (FRET), an established experimental technique for detection of protein-protein interactions, to validate potential cancer genes predicted by MaxLink. Our results provide confidence in the use of MaxLink for selection of new targets in the battle with polygenic diseases.

  8. Cross-validation of the Beunen-Malina method to predict adult height.

    PubMed

    Beunen, Gaston P; Malina, Robert M; Freitas, Duarte I; Maia, José A; Claessens, Albrecht L; Gouveia, Elvio R; Lefevre, Johan

    2010-08-01

    The purpose of this study was to cross-validate the Beunen-Malina method for non-invasive prediction of adult height. Three hundred and eight boys aged 13, 14, 15 and 16 years from the Madeira Growth Study were observed at annual intervals in 1996, 1997 and 1998 and re-measured 7-8 years later. Height, sitting height and the triceps and subscapular skinfolds were measured; skeletal age was assessed using the Tanner-Whitehouse 2 method. Adult height was measured and predicted using the Beunen-Malina method. Maturity groups were classified using relative skeletal age (skeletal age minus chronological age). Pearson correlations, mean differences and standard errors of estimate (SEE) were calculated. Age-specific correlations between predicted and measured adult height vary between 0.70 and 0.85, while age-specific SEE varies between 3.3 and 4.7 cm. The correlations and SEE are similar to those obtained in the development of the original Beunen-Malina method. The Beunen-Malina method is a valid method to predict adult height in adolescent boys and can be used in European populations or populations from European ancestry. Percentage of predicted adult height is a non-invasive valid method to assess biological maturity.
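The age-specific correlations and standard errors of estimate (SEE) reported above can be computed directly from paired predicted/measured heights; a small sketch with made-up heights (not the Madeira Growth Study data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def see(xs, ys):
    """Standard error of estimate for predicting y from x by least squares."""
    r = pearson_r(xs, ys)
    n = len(ys)
    my = sum(ys) / n
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / (n - 1))
    return sy * math.sqrt((1 - r ** 2) * (n - 1) / (n - 2))

# Toy data: predicted vs. measured adult height in cm (illustrative only).
pred = [172.0, 178.5, 169.0, 183.0, 175.5]
meas = [171.0, 180.0, 170.5, 181.0, 177.0]
print(round(pearson_r(pred, meas), 2), round(see(pred, meas), 2), "cm")
```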

  9. Analytical method for the identification and assay of 12 phthalates in cosmetic products: application of the ISO 12787 international standard "Cosmetics-Analytical methods-Validation criteria for analytical results using chromatographic techniques".

    PubMed

    Gimeno, Pascal; Maggio, Annie-Françoise; Bousquet, Claudine; Quoirez, Audrey; Civade, Corinne; Bonnet, Pierre-Antoine

    2012-08-31

    Esters of phthalic acid, more commonly named phthalates, may be present in cosmetic products as ingredients or contaminants. Their presence as contaminants can be due to the manufacturing process, to the raw materials used, or to the migration of phthalates from packaging when plastic (polyvinyl chloride, PVC) is used. Eight phthalates (DBP, DEHP, BBP, DMEP, DnPP, DiPP, DPP, and DiBP), classified H360 or H361, are forbidden in cosmetics according to the European regulation on cosmetics 1223/2009. A GC/MS method was developed for the assay of 12 phthalates in cosmetics, including the 8 regulated phthalates. Analyses are carried out on a GC/MS system in electron impact ionization mode (EI). The separation of phthalates is obtained on a cross-linked 5%-phenyl/95%-dimethylpolysiloxane capillary column, 30 m × 0.25 mm (i.d.) × 0.25 μm film thickness, using a temperature gradient. Phthalate quantification is performed by external calibration using an internal standard. Validation elements obtained on standard solutions highlight a satisfactory system conformity (resolution > 1.5), a common quantification limit of 0.25 ng injected, an acceptable linearity between 0.5 μg mL⁻¹ and 5.0 μg mL⁻¹, as well as a precision and an accuracy in agreement with in-house specifications. Cosmetic samples ready for analytical injection are analyzed after dilution in ethanol, whereas more complex cosmetic matrices, like milks and creams, are assayed after a liquid/liquid extraction using tert-butyl methyl ether (TBME). Depending on the type of cosmetics analyzed, the common limits of quantification for the 12 phthalates were set at 0.5 or 2.5 μg g⁻¹. All samples were assayed using the analytical approach described in the ISO 12787 international standard "Cosmetics-Analytical methods-Validation criteria for analytical results using chromatographic techniques". This analytical protocol is particularly adapted when it is not possible to make reconstituted sample matrices. Copyright © 2012

  10. New Perspectives on the Validity of the "GRE"® General Test for Predicting Graduate School Grades. ETS GRE® Board Research Report. ETS GRE®-14-03. ETS Research Report. RR-14-26

    ERIC Educational Resources Information Center

    Klieger, David M.; Cline, Frederick A.; Holtzman, Steven L.; Minsky, Jennifer L.; Lorenz, Florian

    2014-01-01

    Given the serious consequences of making ill-fated admissions and funding decisions for applicants to graduate and professional school, it is important to rely on sound evidence to optimize such judgments. Previous meta-analytic research has demonstrated the generalizable validity of the "GRE"® General Test for predicting academic…

  11. Predictive value and construct validity of the work functioning screener-healthcare (WFS-H).

    PubMed

    Boezeman, Edwin J; Nieuwenhuijsen, Karen; Sluiter, Judith K

    2016-05-25

    To test the predictive value and convergent construct validity of a 6-item work functioning screener (WFS-H). Healthcare workers (249 nurses) completed a questionnaire containing the work functioning screener (WFS-H) and a work functioning instrument (NWFQ) measuring the following: cognitive aspects of task execution and general incidents, avoidance behavior, conflicts and irritation with colleagues, impaired contact with patients and their family, and level of energy and motivation. Productivity and mental health were also measured. Negative and positive predictive values, AUC values, and sensitivity and specificity were calculated to examine the predictive value of the screener. Correlation analysis was used to examine the construct validity. The screener had good predictive value, since the results showed that a negative screener score is a strong indicator of work functioning not hindered by mental health problems (negative predictive values: 94%-98%; positive predictive values: 21%-36%; AUC: .64-.82; sensitivity: 42%-76%; specificity: 85%-87%). The screener has good construct validity, given its moderate but significant (p < .001) associations with productivity (r = .51), mental health (r = .48), and distress (r = .47). The screener (WFS-H) had good predictive value and good construct validity. Its score offers occupational health professionals a helpful preliminary insight into the work functioning of healthcare workers.

  12. Experimental validation of boundary element methods for noise prediction

    NASA Technical Reports Server (NTRS)

    Seybert, A. F.; Oswald, Fred B.

    1992-01-01

    Experimental validation of methods to predict radiated noise is presented. A combined finite element and boundary element model was used to predict the vibration and noise of a rectangular box excited by a mechanical shaker. The predicted noise was compared to sound power measured by the acoustic intensity method. Inaccuracies in the finite element model shifted the resonance frequencies by about 5 percent. The predicted and measured sound power levels agree within about 2.5 dB. In a second experiment, measured vibration data was used with a boundary element model to predict noise radiation from the top of an operating gearbox. The predicted and measured sound power for the gearbox agree within about 3 dB.

  13. Clinical validation of an epigenetic assay to predict negative histopathological results in repeat prostate biopsies.

    PubMed

    Partin, Alan W; Van Neste, Leander; Klein, Eric A; Marks, Leonard S; Gee, Jason R; Troyer, Dean A; Rieger-Christ, Kimberly; Jones, J Stephen; Magi-Galluzzi, Cristina; Mangold, Leslie A; Trock, Bruce J; Lance, Raymond S; Bigley, Joseph W; Van Criekinge, Wim; Epstein, Jonathan I

    2014-10-01

    The DOCUMENT multicenter trial in the United States validated the performance of an epigenetic test as an independent predictor of prostate cancer risk to guide decision making for repeat biopsy. Confirming an increased negative predictive value could help avoid unnecessary repeat biopsies. We evaluated the archived, cancer negative prostate biopsy core tissue samples of 350 subjects from a total of 5 urological centers in the United States. All subjects underwent repeat biopsy within 24 months with a negative (controls) or positive (cases) histopathological result. Centralized blinded pathology evaluation of the 2 biopsy series was performed in all available subjects from each site. Biopsies were epigenetically profiled for GSTP1, APC and RASSF1 relative to the ACTB reference gene using quantitative methylation specific polymerase chain reaction. Predetermined analytical marker cutoffs were used to determine assay performance. Multivariate logistic regression was used to evaluate all risk factors. The epigenetic assay resulted in a negative predictive value of 88% (95% CI 85-91). In multivariate models correcting for age, prostate specific antigen, digital rectal examination, first biopsy histopathological characteristics and race the test proved to be the most significant independent predictor of patient outcome (OR 2.69, 95% CI 1.60-4.51). The DOCUMENT study validated that the epigenetic assay was a significant, independent predictor of prostate cancer detection in a repeat biopsy collected an average of 13 months after an initial negative result. Due to its 88% negative predictive value adding this epigenetic assay to other known risk factors may help decrease unnecessary repeat prostate biopsies. Copyright © 2014 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  14. Machine learning and predictive data analytics enabling metrology and process control in IC fabrication

    NASA Astrophysics Data System (ADS)

    Rana, Narender; Zhang, Yunlin; Wall, Donald; Dirahoui, Bachir; Bailey, Todd C.

    2015-03-01

    Integrated circuit (IC) technology is going through multiple changes in terms of patterning techniques (multiple patterning, EUV and DSA), device architectures (FinFET, nanowire, graphene) and patterning scale (a few nanometers). These changes require tight controls on processes and measurements to achieve the required device performance, and challenge metrology and process control in terms of capability and quality. Multivariate data with complex nonlinear trends and correlations generally cannot be described well by mathematical or parametric models but can be relatively easily learned by computing machines and used to predict or extrapolate. This paper introduces the predictive metrology approach, which has been applied to three different applications. Machine learning and predictive analytics have been leveraged to accurately predict dimensions of EUV resist patterns down to 18 nm half-pitch from resist shrinkage patterns. These patterns could not be directly and accurately measured due to metrology tool limitations. Machine learning has also been applied to predict electrical performance early in the process pipeline for deep trench capacitance and metal line resistance. As the wafer goes through various processes, its associated cost multiplies. It may take days to weeks to get the electrical performance readout. Predicting the electrical performance early on can be very valuable in enabling timely, actionable decisions such as rework, scrap, or the feedforward/feedback of predicted information (or information derived from it) to improve or monitor processes. This paper provides a general overview of machine learning and advanced analytics applications in advanced semiconductor development and manufacturing.
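At its simplest, predictive metrology of this kind is supervised regression from an indirect measurement to a reference value; a toy sketch using ordinary least squares (the CD values are invented, and real applications would use far richer features and models than a single line fit):

```python
def fit_line(xs, ys):
    """Ordinary least squares fit y = a*x + b (a stand-in for the learned model)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Toy calibration set: measured (shrunken) CD vs. reference CD, in nm.
measured = [15.0, 15.8, 16.5, 17.1, 18.0]
reference = [17.2, 18.1, 18.9, 19.6, 20.5]
a, b = fit_line(measured, reference)
print(round(a * 16.0 + b, 2))  # predicted true CD for a new 16.0 nm measurement
```

The same pattern applies to the electrical-performance use case: train on wafers that have both early in-line measurements and final readouts, then predict the readout for wafers still in the pipeline.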

  15. Predicting the Development of Analytical and Creative Abilities in Upper Elementary Grades

    ERIC Educational Resources Information Center

    Gubbels, Joyce; Segers, Eliane; Verhoeven, Ludo

    2017-01-01

    In some models, intelligence has been described as a multidimensional construct comprising both analytical and creative abilities. In addition, intelligence is considered to be dynamic rather than static. A structural equation model was used to examine the predictive role of cognitive (visual short-term memory, verbal short-term memory, selective…

  16. Random Forest as a Predictive Analytics Alternative to Regression in Institutional Research

    ERIC Educational Resources Information Center

    He, Lingjun; Levine, Richard A.; Fan, Juanjuan; Beemer, Joshua; Stronach, Jeanne

    2018-01-01

    In institutional research, modern data mining approaches are seldom considered to address predictive analytics problems. The goal of this paper is to highlight the advantages of tree-based machine learning algorithms over classic (logistic) regression methods for data-informed decision making in higher education problems, and stress the success of…

  17. Tire Changes, Fresh Air, and Yellow Flags: Challenges in Predictive Analytics for Professional Racing.

    PubMed

    Tulabandhula, Theja; Rudin, Cynthia

    2014-06-01

    Our goal is to design a prediction and decision system for real-time use during a professional car race. In designing a knowledge discovery process for racing, we faced several challenges that were overcome only when domain knowledge of racing was carefully infused within statistical modeling techniques. In this article, we describe how we leveraged expert knowledge of the domain to produce a real-time decision system for tire changes within a race. Our forecasts have the potential to impact how racing teams can optimize strategy by making tire-change decisions to benefit their rank position. Our work significantly expands previous research on sports analytics, as it is the only work on analytical methods for within-race prediction and decision making for professional car racing.

  18. DEVELOPMENT AND VALIDATION OF ANALYTICAL METHODS FOR ENUMERATION OF FECAL INDICATORS AND EMERGING CHEMICAL CONTAMINANTS IN BIOSOLIDS

    EPA Science Inventory

    In 2002 the National Research Council (NRC) issued a report which identified a number of issues regarding biosolids land application practices and pointed out the need for improved and validated analytical techniques for regulated indicator organisms and pathogens. They also call...

  19. Role of learning potential in cognitive remediation: Construct and predictive validity.

    PubMed

    Davidson, Charlie A; Johannesen, Jason K; Fiszdon, Joanna M

    2016-03-01

    The construct, convergent, discriminant, and predictive validity of Learning Potential (LP) was evaluated in a trial of cognitive remediation for adults with schizophrenia-spectrum disorders. LP utilizes a dynamic assessment approach to prospectively estimate an individual's learning capacity if provided the opportunity for specific related learning. LP was assessed in 75 participants at study entry, of whom 41 completed an eight-week cognitive remediation (CR) intervention and 22 received treatment-as-usual (TAU). LP was assessed in a "test-train-test" verbal learning paradigm. Incremental predictive validity was assessed as the degree to which LP predicted memory skill acquisition above and beyond prediction by static verbal learning ability. Examination of construct validity confirmed that LP scores reflected use of the trained semantic clustering strategy. LP scores correlated with executive functioning and education history, but not with other demographics or symptom severity. Following the eight-week active phase, the TAU group showed little change in skill acquisition outcomes, which were related to static baseline verbal learning ability but not to LP. For the CR group, LP significantly predicted skill acquisition in the domains of verbal and visuospatial memory, but not auditory working memory. Furthermore, LP predicted skill acquisition incrementally beyond relevant background characteristics, symptoms, and neurocognitive abilities. Results suggest that LP assessment can significantly improve prediction of specific skill acquisition with cognitive training, particularly for the domain assessed, and thereby may prove useful in the individualization of treatment. Published by Elsevier B.V.

  20. Analytical techniques and method validation for the measurement of selected semivolatile and nonvolatile organofluorochemicals in air.

    PubMed

    Reagen, William K; Lindstrom, Kent R; Thompson, Kathy L; Flaherty, John M

    2004-09-01

    The widespread use of semi- and nonvolatile organofluorochemicals in industrial facilities, concern about their persistence, and relatively recent advancements in liquid chromatography/mass spectrometry (LC/MS) technology have led to the development of new analytical methods to assess potential worker exposure to airborne organofluorochemicals. Techniques were evaluated for the determination of 19 organofluorochemicals and for total fluorine in ambient air samples. Due to the potential biphasic nature of most of these fluorochemicals when airborne, Occupational Safety and Health Administration (OSHA) versatile sampler (OVS) tubes were used to simultaneously trap fluorochemical particulates and vapors from workplace air. Analytical methods were developed for OVS air samples to quantitatively analyze for total fluorine using oxygen bomb combustion/ion selective electrode and for 17 organofluorochemicals using LC/MS and gas chromatography/mass spectrometry (GC/MS). The experimental design for this validation was based on the National Institute of Occupational Safety and Health (NIOSH) Guidelines for Air Sampling and Analytical Method Development and Evaluation, with some revisions of the experimental design. The study design incorporated experiments to determine analytical recovery and stability, sampler capacity, the effect of some environmental parameters on recoveries, storage stability, limits of detection, precision, and accuracy. Fluorochemical mixtures were spiked onto each OVS tube over a range of 0.06-6 microg for each of 12 compounds analyzed by LC/MS and 0.3-30 microg for 5 compounds analyzed by GC/MS. These ranges allowed reliable quantitation at 0.001-0.1 mg/m3 in general for LC/MS analytes and 0.005-0.5 mg/m3 for GC/MS analytes when 60 L of air are sampled. The organofluorochemical exposure guideline (EG) is currently 0.1 mg/m3 for many analytes, with one exception being ammonium perfluorooctanoate (EG is 0.01 mg/m3). 
Total fluorine results may be used

  1. Experimental and analytical investigation of a modified ring cusp NSTAR engine

    NASA Technical Reports Server (NTRS)

    Sengupta, Anita

    2005-01-01

    A series of experimental measurements on a modified laboratory NSTAR engine were used to validate a zero-dimensional analytical discharge performance model of a ring cusp ion thruster. The model predicts the discharge performance of a ring cusp NSTAR thruster as a function of the magnetic field configuration, thruster geometry, and throttle level. Analytical formalisms for electron and ion confinement are used to predict the ionization efficiency for a given thruster design. Explicit determination of discharge loss and volume-averaged plasma parameters is also obtained. The model was used to predict the performance of the nominal and modified three- and four-ring cusp 30-cm ion thruster configurations operating at the full power (2.3 kW) NSTAR throttle level. Experimental measurements of the modified engine configuration's discharge loss compare well with the predicted values for propellant utilizations from 80 to 95%. The theory, as validated by experiment, indicates that increasing the magnetic field strength of the minimum closed contour reduces Maxwellian electron diffusion and electrostatically confines the ion population, reducing subsequent loss to the anode wall. The theory also indicates that increasing the cusp strength and minimizing the cusp area improve primary electron confinement, increasing the probability of an ionization collision prior to loss at the cusp.

  2. Analytical Modeling of Groundwater Seepages to St. Lucie Estuary

    NASA Astrophysics Data System (ADS)

    Lee, J.; Yeh, G.; Hu, G.

    2008-12-01

    In this paper, six analytical models describing the hydraulic interaction of stream-aquifer systems were applied to the St. Lucie Estuary (SLE). These are analytical solutions for: (1) flow from a finite aquifer to a canal, (2) flow from an infinite aquifer to a canal, (3) the linearized Laplace system in a seepage surface, (4) wave propagation in the aquifer, (5) potential flow through stratified unconfined aquifers, and (6) flow through stratified confined aquifers. Input data for the analytical solutions were obtained from monitoring wells and river stages at seepage-meter sites. Four transects in the study area are available: Club Med, Harbour Ridge, Lutz/MacMillan, and Pendarvis Cove, located in the St. Lucie River. The analytical models were first calibrated with seepage meter measurements and then used to estimate groundwater discharges into the St. Lucie River. From this process, analytical relationships between the seepage rate and river stages and/or groundwater tables were established to predict the seasonal and monthly variation in groundwater seepage into the SLE. It was found that the seepage rate estimations by the analytical models agreed well with measured data in some cases but only fairly in others. This is not unexpected because analytical solutions have some inherently simplified assumptions, which may be more valid for some cases than for others. From analytical calculations, it is possible to predict approximate seepage rates in the study domain when the assumptions underlying these analytical models are valid. The finite- and infinite-aquifer models and the linearized Laplace method are good for the Pendarvis Cove and Lutz/MacMillan sites, but only fair for the other two sites. The wave propagation model gave very good agreement in phase but only fair agreement in magnitude for all four sites. The stratified unconfined and confined aquifer models gave similarly good agreement with measurements at three sites but poor agreement at the Club Med site.
None of
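
Several of the solutions listed in this record are classical closed forms. As a generic illustration (a textbook Dupuit approximation, not one of the paper's calibrated site models), steady unconfined flow from an aquifer toward a canal can be sketched as:

```python
# Classic Dupuit solution for steady unconfined flow toward a canal:
#   q = K * (h1^2 - h2^2) / (2 * L)   per unit length of canal,
# where K is hydraulic conductivity, h1/h2 are the water-table and canal
# heads above the aquifer base, and L is the flow distance.
def dupuit_seepage(K, h_aquifer, h_canal, L):
    """Return seepage rate [m^2/day per metre of canal]."""
    return K * (h_aquifer**2 - h_canal**2) / (2.0 * L)

# Illustrative numbers: K = 5 m/d, water table 4 m, canal stage 3 m, 100 m away.
q = dupuit_seepage(5.0, 4.0, 3.0, 100.0)
print(f"seepage: {q:.3f} m^2/day per metre of canal")  # 5*(16-9)/200 = 0.175
```

Relationships of this kind, relating seepage to river stage and groundwater table, are what the abstract describes being calibrated against seepage-meter data.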

  3. Analytic Validation of Immunohistochemistry Assays: New Benchmark Data From a Survey of 1085 Laboratories.

    PubMed

    Stuart, Lauren N; Volmar, Keith E; Nowak, Jan A; Fatheree, Lisa A; Souers, Rhona J; Fitzgibbons, Patrick L; Goldsmith, Jeffrey D; Astles, J Rex; Nakhleh, Raouf E

    2017-09-01

    - A cooperative agreement between the College of American Pathologists (CAP) and the United States Centers for Disease Control and Prevention was undertaken to measure laboratories' awareness and implementation of an evidence-based laboratory practice guideline (LPG) on immunohistochemical (IHC) validation practices published in 2014. - To establish new benchmark data on IHC laboratory practices. - A 2015 survey on IHC assay validation practices was sent to laboratories subscribed to specific CAP proficiency testing programs and to additional nonsubscribing laboratories that perform IHC testing. Specific questions were designed to capture laboratory practices not addressed in a 2010 survey. - The analysis was based on responses from 1085 laboratories that perform IHC staining. Ninety-six percent (809 of 844) always documented validation of IHC assays. Sixty percent (648 of 1078) had separate procedures for predictive and nonpredictive markers, 42.7% (220 of 515) had procedures for laboratory-developed tests, 50% (349 of 697) had procedures for testing cytologic specimens, and 46.2% (363 of 785) had procedures for testing decalcified specimens. Minimum case numbers were specified by 85.9% (720 of 838) of laboratories for nonpredictive markers and 76% (584 of 768) for predictive markers. Median concordance requirements were 95% for both types. For initial validation, 75.4% (538 of 714) of laboratories adopted the 20-case minimum for nonpredictive markers and 45.9% (266 of 579) adopted the 40-case minimum for predictive markers as outlined in the 2014 LPG. The most common method for validation was correlation with morphology and expected results. Laboratories also reported which assay changes necessitated revalidation and their minimum case requirements. - Benchmark data on current IHC validation practices and procedures may help laboratories understand the issues and influence further refinement of LPG recommendations.

  4. Systematic review of the concurrent and predictive validity of MRI biomarkers in OA

    PubMed Central

    Hunter, D.J.; Zhang, W.; Conaghan, Philip G.; Hirko, K.; Menashe, L.; Li, L.; Reichmann, W.M.; Losina, E.

    2012-01-01

    SUMMARY Objective To summarize the literature on the concurrent and predictive validity of MRI-based measures of osteoarthritis (OA) structural change. Methods An online literature search was conducted of the OVID, EMBASE, CINAHL, PsycINFO and Cochrane databases for articles published up to the time of the search, April 2009. The 1338 abstracts obtained with this search were preliminarily screened for relevance by two reviewers. Of these, 243 were selected for data extraction for this analysis on validity as well as for separate reviews on discriminant validity and diagnostic performance. Of these, 142 manuscripts included data pertinent to concurrent validity and 61 manuscripts to the predictive validity review. For this analysis we extracted data on criterion (concurrent and predictive) validity from both longitudinal and cross-sectional studies for all synovial joint tissues as it relates to MRI measurement in OA. Results Concurrent validity of MRI in OA has been examined against symptoms, radiography, histology/pathology, arthroscopy, CT, and alignment. The relation of bone marrow lesions, synovitis and effusion to pain was moderate to strong. There was a weak or no relation of cartilage morphology or meniscal tears to pain. The relation of cartilage morphology to radiographic OA and radiographic joint space was inconsistent. There was a higher frequency of meniscal tears, synovitis and other features in persons with radiographic OA. The relation of cartilage to other constructs, including histology and arthroscopy, was stronger. Predictive validity of MRI in OA has been examined for the ability to predict total knee replacement (TKR), change in symptoms, radiographic progression as well as MRI progression. Quantitative cartilage volume change and presence of cartilage defects or bone marrow lesions are potential predictors of TKR.
Conclusion MRI has inherent strengths and unique advantages in its ability to visualize multiple individual tissue pathologies relating to pain

  5. Beyond Engagement Analytics: Which Online Mixed-Data Factors Predict Student Learning Outcomes?

    ERIC Educational Resources Information Center

    Strang, Kenneth David

    2017-01-01

    This mixed-method study focuses on online learning analytics, a research area of importance. Several important student attributes and their online activities are examined to identify what seems to work best to predict higher grades. The purpose is to explore the relationships between student grade and key learning engagement factors using a large…

  6. A three-step approach for the derivation and validation of high-performing predictive models using an operational dataset: congestive heart failure readmission case study.

    PubMed

    AbdelRahman, Samir E; Zhang, Mingyuan; Bray, Bruce E; Kawamoto, Kensaku

    2014-05-27

    The aim of this study was to propose an analytical approach to develop high-performing predictive models for congestive heart failure (CHF) readmission using an operational dataset with incomplete records and changing data over time. Our analytical approach involves three steps: pre-processing, systematic model development, and risk factor analysis. For pre-processing, variables that were absent in >50% of records were removed. Moreover, the dataset was divided into a validation dataset and derivation datasets, which were separated into three temporal subsets based on changes to the data over time. For systematic model development, using the different temporal datasets and the remaining explanatory variables, the models were developed by combining the use of various (i) statistical analyses to explore the relationships between the validation and the derivation datasets; (ii) adjustment methods for handling missing values; (iii) classifiers; (iv) feature selection methods; and (v) discretization methods. We then selected the best derivation dataset and the models with the highest predictive performance. For risk factor analysis, factors in the highest-performing predictive models were analyzed and ranked using (i) statistical analyses of the best derivation dataset, (ii) feature rankers, and (iii) a newly developed algorithm to categorize risk factors as being strong, regular, or weak. The analysis dataset consisted of 2,787 CHF hospitalizations at University of Utah Health Care from January 2003 to June 2013. In this study, we used the complete-case analysis and mean-based imputation adjustment methods; the wrapper subset feature selection method; and four ranking strategies based on information gain, gain ratio, symmetrical uncertainty, and wrapper subset feature evaluators. The best-performing models resulted from the use of a complete-case analysis derivation dataset combined with the Class-Attribute Contingency Coefficient discretization method and a voting

  7. Predictive classification of self-paced upper-limb analytical movements with EEG.

    PubMed

    Ibáñez, Jaime; Serrano, J I; del Castillo, M D; Minguez, J; Pons, J L

    2015-11-01

    The extent to which the electroencephalographic activity allows the characterization of movements with the upper limb is an open question. This paper describes the design and validation of a classifier of upper-limb analytical movements based on electroencephalographic activity extracted from intervals preceding self-initiated movement tasks. Features selected for the classification are subject specific and associated with the movement tasks. Further tests are performed to reject the hypothesis that other information different from the task-related cortical activity is being used by the classifiers. Six healthy subjects were measured performing self-initiated upper-limb analytical movements. A Bayesian classifier was used to classify among seven different kinds of movements. Features considered covered the alpha and beta bands. A genetic algorithm was used to optimally select a subset of features for the classification. An average accuracy of 62.9 ± 7.5% was reached, which was above the baseline level observed with the proposed methodology (30.2 ± 4.3%). The study shows how the electroencephalography carries information about the type of analytical movement performed with the upper limb and how it can be decoded before the movement begins. In neurorehabilitation environments, this information could be used for monitoring and assisting purposes.
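
A toy version of the kind of Bayesian classification described, using Gaussian naive Bayes on synthetic band-power features (the study's subject-specific feature selection via a genetic algorithm and its seven movement classes are omitted here):

```python
import numpy as np

# Minimal Gaussian naive-Bayes sketch: classify pre-movement intervals from
# band-power features (e.g. alpha/beta power at a few electrodes).
# Data are synthetic; real EEG features would be extracted per subject.
rng = np.random.default_rng(1)
n_per_class, n_feat = 60, 4
means = np.array([[0., 0., 0., 0.],
                  [1., 1., 0., 0.],
                  [0., 0., 1., 1.]])          # 3 "movement types"
X = np.vstack([rng.normal(m, 0.5, (n_per_class, n_feat)) for m in means])
y = np.repeat(np.arange(3), n_per_class)

def fit(X, y):
    """Per-class feature means and variances (diagonal Gaussian model)."""
    return {c: (X[y == c].mean(0), X[y == c].var(0) + 1e-9)
            for c in np.unique(y)}

def predict(model, X):
    scores = []
    for c, (mu, var) in model.items():        # keys inserted in class order
        ll = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var).sum(1)
        scores.append(ll)
    return np.argmax(np.stack(scores, axis=1), axis=1)

model = fit(X, y)
acc = (predict(model, X) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

The abstract's reported accuracy (62.9% over seven classes) is naturally lower than this separable toy case; the point is only the structure of the classifier.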

  8. Multiple piezo-patch energy harvesters integrated to a thin plate with AC-DC conversion: analytical modeling and numerical validation

    NASA Astrophysics Data System (ADS)

    Aghakhani, Amirreza; Basdogan, Ipek; Erturk, Alper

    2016-04-01

    Plate-like components are widely used in numerous automotive, marine, and aerospace applications where they can be employed as host structures for vibration-based energy harvesting. Piezoelectric patch harvesters can be easily attached to these structures to convert vibrational energy to electrical energy. Power output investigations of these harvesters require accurate models for energy harvesting performance evaluation and optimization. Equivalent circuit modeling of cantilever-based vibration energy harvesters for estimation of the electrical response has been proposed in recent years. However, equivalent circuit formulation and analytical modeling of multiple piezo-patch energy harvesters integrated to thin plates, including nonlinear circuits, has not been studied. In this study, an equivalent circuit model of multiple parallel piezoelectric patch harvesters together with a resistive load is built in the electronic circuit simulation software SPICE, and voltage frequency response functions (FRFs) are validated using the analytical distributed-parameter model. An analytical formulation of the piezoelectric patches in parallel configuration for the DC voltage output is derived while the patches are connected to a standard AC-DC circuit. The analytical model is based on the equivalent load impedance approach for the piezoelectric capacitance and AC-DC circuit elements. The analytical results are validated numerically via SPICE simulations. Finally, DC power outputs of the harvesters are computed and compared with the peak power amplitudes in the AC output case.

  9. Development of an analytical-numerical model to predict radiant emission or absorption

    NASA Technical Reports Server (NTRS)

    Wallace, Tim L.

    1994-01-01

    The development of an analytical-numerical model to predict radiant emission or absorption is discussed. A Voigt profile is assumed to predict the spectral qualities of a singlet atomic transition line for atomic species of interest to the OPAD program. The present state of this model is described in each progress report required under contract. Model and code development is guided by experimental data where available. When completed, the model will be used to provide estimates of species erosion rates from spectral data collected from rocket exhaust plumes or other sources.
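
The Voigt profile mentioned here is the convolution of a Gaussian (Doppler broadening) with a Lorentzian (natural/collisional broadening). A numerical sketch by direct convolution (production spectroscopy codes typically evaluate the Faddeeva function instead; all parameters below are illustrative):

```python
import numpy as np

# Voigt profile = Gaussian (*) Lorentzian, computed by numerical convolution
# on a uniform grid. Both component profiles are unit-normalized, so the
# resulting Voigt profile should integrate to ~1 (minus truncated tail mass).
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
sigma, gamma = 1.0, 0.5                       # illustrative widths
gauss = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
lorentz = gamma / (np.pi * (x**2 + gamma**2))
voigt = np.convolve(gauss, lorentz, mode="same") * dx

area = voigt.sum() * dx
print(f"integrated area ~ {area:.3f}")        # slightly <1 from Lorentzian tails
```

Fitting such a profile to an observed line yields the Doppler and collisional widths separately, which is what makes it useful for plume spectroscopy.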

  10. Pulsed plane wave analytic solutions for generic shapes and the validation of Maxwell's equations solvers

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Vastano, John A.; Lomax, Harvard

    1992-01-01

    Generic shapes are subjected to pulsed plane waves of arbitrary shape. The resulting scattered electromagnetic fields are determined analytically. These fields are then computed efficiently at field locations for which numerically determined EM fields are required. Of particular interest are the pulsed waveform shapes typically utilized by radar systems. The results can be used to validate the accuracy of finite difference time domain Maxwell's equations solvers. A two-dimensional solver which is second- and fourth-order accurate in space and fourth-order accurate in time is examined. Dielectric media properties are modeled by a ramping technique which simplifies the associated gridding of body shapes. The attributes of the ramping technique are evaluated by comparison with the analytic solutions.

  11. The Role of Teamwork in the Analysis of Big Data: A Study of Visual Analytics and Box Office Prediction.

    PubMed

    Buchanan, Verica; Lu, Yafeng; McNeese, Nathan; Steptoe, Michael; Maciejewski, Ross; Cooke, Nancy

    2017-03-01

    Historically, domains such as business intelligence would require a single analyst to engage with data, develop a model, answer operational questions, and predict future behaviors. However, as the problems and domains become more complex, organizations are employing teams of analysts to explore and model data to generate knowledge. Furthermore, given the rapid increase in data collection, organizations are struggling to develop practices for intelligence analysis in the era of big data. Currently, a variety of machine learning and data mining techniques are available to model data and to generate insights and predictions, and developments in the field of visual analytics have focused on how to effectively link data mining algorithms with interactive visuals to enable analysts to explore, understand, and interact with data and data models. Although studies have explored the role of single analysts in the visual analytics pipeline, little work has explored the role of teamwork and visual analytics in the analysis of big data. In this article, we present an experiment integrating statistical models, visual analytics techniques, and user experiments to study the role of teamwork in predictive analytics. We frame our experiment around the analysis of social media data for box office prediction problems and compare the prediction performance of teams, groups, and individuals. Our results indicate that a team's performance is mediated by the team's characteristics such as openness of individual members to others' positions and the type of planning that goes into the team's analysis. These findings have important implications for how organizations should create teams in order to make effective use of information from their analytic models.

  12. TU-F-17A-03: An Analytical Respiratory Perturbation Model for Lung Motion Prediction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, G; Yuan, A; Wei, J

    2014-06-15

    Purpose: Breathing irregularity is common, causing unreliable prediction of tumor motion for correlation-based surrogates. Both tidal volume (TV) and breathing pattern (BP=ΔVthorax/TV, where TV=ΔVthorax+ΔVabdomen) affect lung motion in the anterior-posterior and superior-inferior directions. We developed a novel respiratory motion perturbation (RMP) model in analytical form to account for changes in TV and BP in motion prediction from simulation to treatment. Methods: The RMP model is an analytical function of patient-specific anatomic and physiologic parameters. It contains a base-motion trajectory d(x,y,z) derived from a 4-dimensional computed tomography (4DCT) at simulation and a perturbation term Δd(ΔTV,ΔBP) accounting for deviation at treatment from simulation. The perturbation is dependent on tumor-specific location and patient-specific anatomy. Eleven patients with simulation and treatment 4DCT images were used to assess the RMP method in motion prediction from 4DCT1 to 4DCT2, and vice versa. For each patient, ten motion trajectories of corresponding points in the lower lobes were measured in both 4DCTs: one served as the base-motion trajectory and the other as the ground truth for comparison. In total, 220 motion trajectory predictions were assessed. The motion discrepancy between the two 4DCTs for each patient served as a control. An established 5D motion model was used for comparison. Results: The average absolute error of the RMP model prediction in the superior-inferior direction is 1.6±1.8 mm, similar to 1.7±1.6 mm from the 5D model (p=0.98). Some uncertainty is associated with the limited spatial resolution (2.5 mm slice thickness) and temporal resolution (10 phases). The non-corrected motion discrepancy between the two 4DCTs is 2.6±2.7 mm, with a maximum of ±20 mm, and correction is necessary (p=0.01). Conclusion: The analytical motion model predicts lung motion with accuracy similar to the 5D model. The analytical model is based on physical relationships
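
The base-plus-perturbation structure can be sketched schematically as follows. The multiplicative form and the sensitivity coefficients below are illustrative assumptions for exposition, not the paper's fitted patient-specific terms:

```python
import numpy as np

# Schematic of the perturbation idea: a base motion trajectory measured at
# simulation is adjusted by a term driven by changes in tidal volume (dTV)
# and breathing pattern (dBP) at treatment.
t = np.linspace(0, 1, 11)                     # one breathing cycle, 10 phases
d_base = 8.0 * np.sin(np.pi * t) ** 2         # SI motion at simulation [mm]

def perturbed_trajectory(d_base, dTV, dBP, a=5.0, b=3.0):
    """d(t) + delta-d(dTV, dBP); a, b are assumed sensitivity coefficients."""
    return d_base * (1.0 + a * dTV + b * dBP)

# Slightly deeper breathing (dTV > 0), slightly more abdominal pattern (dBP < 0)
d_treat = perturbed_trajectory(d_base, dTV=0.02, dBP=-0.01)
print(f"peak motion: {d_base.max():.1f} -> {d_treat.max():.2f} mm")
```

The point of such a model is that only the volumetric surrogates (ΔTV, ΔBP) need to be measured at treatment; the spatial trajectory itself comes from the simulation 4DCT.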

  13. Criterion for evaluating the predictive ability of nonlinear regression models without cross-validation.

    PubMed

    Kaneko, Hiromasa; Funatsu, Kimito

    2013-09-23

    We propose predictive performance criteria for nonlinear regression models without cross-validation. The proposed criteria are the determination coefficient and the root-mean-square error for the midpoints between k-nearest-neighbor data points. These criteria can be used to evaluate predictive ability after the regression models are updated, whereas cross-validation cannot be performed in such a situation. The proposed method is effective and helpful in handling big data when cross-validation cannot be applied. By analyzing data from numerical simulations and quantitative structural relationships, we confirm that the proposed criteria enable the predictive ability of the nonlinear regression models to be appropriately quantified.
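
A simplified one-dimensional illustration of the midpoint idea (a sketch of the concept, not the authors' exact implementation): evaluate the fitted model at midpoints between k-nearest-neighbor training points, using the averaged responses as reference values, so no cross-validation folds are needed.

```python
import numpy as np

# Fit a nonlinear model, then score it at k-NN midpoints instead of via CV.
rng = np.random.default_rng(2)
X = np.sort(rng.uniform(-3, 3, 100))
y = np.sin(X) + rng.normal(0, 0.05, 100)

coef = np.polyfit(X, y, deg=7)                # the nonlinear model under test
def model(x):
    return np.polyval(coef, x)

k = 3
x_mid, y_mid = [], []
for i, xi in enumerate(X):
    nn = np.argsort(np.abs(X - xi))[1:k + 1]  # k nearest neighbors (not self)
    x_mid.extend((xi + X[nn]) / 2)            # midpoint inputs
    y_mid.extend((y[i] + y[nn]) / 2)          # averaged responses as reference
x_mid, y_mid = np.array(x_mid), np.array(y_mid)

resid = y_mid - model(x_mid)
rmse = np.sqrt(np.mean(resid ** 2))
r2 = 1 - np.sum(resid ** 2) / np.sum((y_mid - y_mid.mean()) ** 2)
print(f"midpoint RMSE = {rmse:.3f}, r2 = {r2:.3f}")
```

Because midpoints lie between observed samples, they probe interpolation quality of the updated model directly, which is the situation the abstract targets when cross-validation cannot be rerun.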

  14. Validation of NASA Thermal Ice Protection Computer Codes. Part 1; Program Overview

    NASA Technical Reports Server (NTRS)

    Miller, Dean; Bond, Thomas; Sheldon, David; Wright, William; Langhals, Tammy; Al-Khalil, Kamel; Broughton, Howard

    1996-01-01

    The Icing Technology Branch at NASA Lewis has been involved in an effort to validate two thermal ice protection codes developed at the NASA Lewis Research Center: LEWICE/Thermal (electrothermal deicing & anti-icing) and ANTICE (hot-gas & electrothermal anti-icing). The Thermal Code Validation effort was designated as a priority during a 1994 'peer review' of the NASA Lewis icing program, and was implemented as a cooperative effort with industry. During April 1996, the first of a series of experimental validation tests was conducted in the NASA Lewis Icing Research Tunnel (IRT). The purpose of the April 1996 test was to validate the electrothermal predictive capabilities of both LEWICE/Thermal and ANTICE. A heavily instrumented test article was designed and fabricated for this test, with the capability of simulating electrothermal de-icing and anti-icing modes of operation. Thermal measurements were then obtained over a range of test conditions for comparison with analytical predictions. This paper will present an overview of the test, including a detailed description of: (1) the validation process; (2) test article design; (3) test matrix development; and (4) test procedures. Selected experimental results will be presented for de-icing and anti-icing modes of operation. Finally, the status of the validation effort at this point will be summarized. Detailed comparisons between analytical predictions and experimental results are contained in the following two papers: 'Validation of NASA Thermal Ice Protection Computer Codes: Part 2-The Validation of LEWICE/Thermal' and 'Validation of NASA Thermal Ice Protection Computer Codes: Part 3-The Validation of ANTICE'.

  15. What Is Trust? Ethics and Risk Governance in Precision Medicine and Predictive Analytics

    PubMed Central

    Adjekum, Afua; Ienca, Marcello

    2017-01-01

    Trust is a ubiquitous term used in emerging technology (e.g., Big Data, precision medicine), innovation policy, and governance literatures in particular. But what exactly is trust? Even though trust is considered a critical requirement for the successful deployment of precision medicine initiatives, nonetheless, there is a need for further conceptualization with regard to what qualifies as trust, and what factors might establish and sustain trust in precision medicine, predictive analytics, and large-scale biology. These new fields of 21st century medicine and health often deal with the “futures” and hence, trust gains a temporal and ever-present quality for both the present and the futures anticipated by new technologies and predictive analytics. We address these conceptual gaps that have important practical implications in the way we govern risk and unknowns associated with emerging technologies in biology, medicine, and health broadly. We provide an in-depth conceptual analysis and an operative definition of trust dynamics in precision medicine. In addition, we identify three main types of “trust facilitators”: (1) technical, (2) ethical, and (3) institutional. This three-dimensional framework on trust is necessary for building and maintaining trust in 21st century knowledge-based innovations in which governments and publics invest for progressive societal change, development, and sustainable prosperity. Importantly, we analyze, identify, and deliberate on the dimensions of precision medicine and large-scale biology that have carved out trust as a pertinent tool to its success. Moving forward, we propose a “points to consider” on how best to enhance trust in precision medicine and predictive analytics. PMID:29257733

  16. What Is Trust? Ethics and Risk Governance in Precision Medicine and Predictive Analytics.

    PubMed

    Adjekum, Afua; Ienca, Marcello; Vayena, Effy

    2017-12-01

    Trust is a ubiquitous term used in emerging technology (e.g., Big Data, precision medicine), innovation policy, and governance literatures in particular. But what exactly is trust? Even though trust is considered a critical requirement for the successful deployment of precision medicine initiatives, nonetheless, there is a need for further conceptualization with regard to what qualifies as trust, and what factors might establish and sustain trust in precision medicine, predictive analytics, and large-scale biology. These new fields of 21st century medicine and health often deal with the "futures" and hence, trust gains a temporal and ever-present quality for both the present and the futures anticipated by new technologies and predictive analytics. We address these conceptual gaps that have important practical implications in the way we govern risk and unknowns associated with emerging technologies in biology, medicine, and health broadly. We provide an in-depth conceptual analysis and an operative definition of trust dynamics in precision medicine. In addition, we identify three main types of "trust facilitators": (1) technical, (2) ethical, and (3) institutional. This three-dimensional framework on trust is necessary for building and maintaining trust in 21st century knowledge-based innovations in which governments and publics invest for progressive societal change, development, and sustainable prosperity. Importantly, we analyze, identify, and deliberate on the dimensions of precision medicine and large-scale biology that have carved out trust as a pertinent tool to its success. Moving forward, we propose a "points to consider" on how best to enhance trust in precision medicine and predictive analytics.

  17. Predictive Analytical Model for Isolator Shock-Train Location in a Mach 2.2 Direct-Connect Supersonic Combustion Tunnel

    NASA Astrophysics Data System (ADS)

    Lingren, Joe; Vanstone, Leon; Hashemi, Kelley; Gogineni, Sivaram; Donbar, Jeffrey; Akella, Maruthi; Clemens, Noel

    2016-11-01

    This study develops an analytical model for predicting the location of the leading shock of a shock-train in the constant-area isolator section of a Mach 2.2 direct-connect scramjet simulation tunnel. The effective geometry of the isolator is assumed to be a weakly converging duct owing to boundary-layer growth. For a given pressure rise across the isolator, quasi-1D relations for isentropic and normal-shock flow can then be used to predict the normal shock location. The surface pressure distribution through the isolator was measured during experiments, so both the actual and predicted shock locations could be determined. Three methods of finding the shock-train location are examined: one based on the measured pressure rise, one using a non-physics-based control model, and one using the physics-based analytical model. The analytical model performs better than the non-physics-based model in all cases. It is less accurate than the pressure-threshold method but requires significantly less information to compute. In contrast to other methods for predicting shock-train location, this method is relatively accurate yet requires as little as a single pressure measurement, which makes it potentially useful for unstart control applications.
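The quasi-1D relation at the heart of such a prediction can be sketched in a few lines. This is a hedged illustration, not the paper's model: it only inverts the standard normal-shock static-pressure ratio (assuming γ = 1.4) for the pre-shock Mach number; the bisection solver and the Mach 2.2 example value are assumptions for illustration.

```python
# Hedged sketch: invert the quasi-1D normal-shock pressure ratio for the
# pre-shock Mach number. Illustrative only; the paper's model additionally
# accounts for the boundary-layer-corrected converging isolator geometry.

def normal_shock_pressure_ratio(M1, gamma=1.4):
    """Static pressure ratio p2/p1 across a normal shock at Mach M1."""
    return 1.0 + 2.0 * gamma / (gamma + 1.0) * (M1 ** 2 - 1.0)

def mach_from_pressure_ratio(p_ratio, gamma=1.4, tol=1e-10):
    """Invert the relation above by bisection for M1 >= 1."""
    lo, hi = 1.0, 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if normal_shock_pressure_ratio(mid, gamma) < p_ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

ratio = normal_shock_pressure_ratio(2.2)   # p2/p1 = 5.48 at Mach 2.2
mach = mach_from_pressure_ratio(ratio)     # recovers ~2.2
```

With the pre-shock Mach number in hand, a known (or assumed) Mach distribution along the weakly converging duct maps it to an axial shock location.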

  18. A new framework to enhance the interpretation of external validation studies of clinical prediction models.

    PubMed

    Debray, Thomas P A; Vergouwe, Yvonne; Koffijberg, Hendrik; Nieboer, Daan; Steyerberg, Ewout W; Moons, Karel G M

    2015-03-01

    It is widely acknowledged that the performance of diagnostic and prognostic prediction models should be assessed in external validation studies, using independent data from samples that are "different but related" to the development sample. We developed a framework of methodological steps and statistical methods for analyzing and enhancing the interpretation of results from external validation studies of prediction models. We propose to quantify the degree of relatedness between development and validation samples on a scale ranging from reproducibility to transportability by evaluating their case-mix differences. We subsequently assess the model's performance in the validation sample and interpret that performance in view of the case-mix differences. Finally, the model may be adjusted to the validation setting. We illustrate this three-step framework with a prediction model for diagnosing deep venous thrombosis, using three validation samples with varying case mix. While one external validation sample merely assessed the model's reproducibility, the two others assessed model transportability. The performance in all validation samples was adequate, and the model did not require extensive updating to correct for miscalibration or poor fit to the validation settings. The proposed framework enhances the interpretation of findings at external validation of prediction models. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
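As a rough sketch of the framework's performance-assessment step, the snippet below computes two standard external-validation metrics, discrimination (c-statistic) and calibration-in-the-large, for a hypothetical logistic prediction model; the coefficients and simulated validation data are invented, not taken from the paper.

```python
# Hedged sketch: external validation of a (hypothetical) logistic
# prediction model. Coefficients and validation data are invented.
import numpy as np

rng = np.random.default_rng(0)

beta0, beta1 = -1.0, 1.2          # "developed" model: logit(p) = b0 + b1*x

# External validation sample with a shifted case mix.
x_val = rng.normal(0.5, 1.2, 2000)
p_true = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x_val)))
y_val = rng.binomial(1, p_true)

p_hat = 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x_val)))  # model predictions

# Discrimination: c-statistic, the probability that a randomly chosen
# event receives a higher prediction than a randomly chosen non-event.
ev, ne = p_hat[y_val == 1], p_hat[y_val == 0]
diff = ev[:, None] - ne[None, :]
c_stat = float((diff > 0).mean() + 0.5 * (diff == 0).mean())

# Calibration-in-the-large: observed minus mean predicted event rate
# (near zero here because the predictions come from the true model).
cal_large = float(y_val.mean() - p_hat.mean())
```

In the framework's terms, repeating this in samples with increasingly different case mix moves the assessment from reproducibility toward transportability.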

  19. The influence of retrieval practice on metacognition: The contribution of analytic and non-analytic processes.

    PubMed

    Miller, Tyler M; Geraci, Lisa

    2016-05-01

    People may change their memory predictions after retrieval practice using naïve theories of memory and/or subjective experience - analytic and non-analytic processes, respectively. The current studies disentangled the contributions of each process. In one condition, learners studied paired associates, made a memory prediction, completed a short run of retrieval practice, and made a second prediction. In another condition, judges read about a yoked learner's retrieval practice performance but did not participate in retrieval practice themselves and therefore could not use non-analytic processes for the second prediction. In Study 1, learners reduced their predictions following moderately difficult retrieval practice, whereas judges increased theirs. In Study 2, learners made lower adjusted predictions than judges following both easy and difficult retrieval practice. In Study 3, judge-like participants used analytic processes to report adjusted predictions. Overall, the results suggest that non-analytic processes play a key role in participants' reduction of their predictions after retrieval practice. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Analytical functions to predict cosmic-ray neutron spectra in the atmosphere.

    PubMed

    Sato, Tatsuhiko; Niita, Koji

    2006-09-01

    Estimation of cosmic-ray neutron spectra in the atmosphere is an essential issue in evaluating aircrew doses and the soft-error rates of semiconductor devices. We therefore performed Monte Carlo simulations of neutron spectra using the PHITS code with the JENDL High-Energy nuclear data library. Excellent agreement was observed between the calculated and measured spectra over a wide altitude range, even at ground level. Based on a comprehensive analysis of the simulation results, we propose analytical functions that can predict the cosmic-ray neutron spectra at any location in the atmosphere below 20 km altitude, accounting for the influence of local geometries such as the ground and aircraft on the spectra. The accuracy of the analytical functions was verified against various experimental data.

  1. Verification of an Analytical Method for Measuring Crystal Nucleation Rates in Glasses from DTA Data

    NASA Technical Reports Server (NTRS)

    Ranasinghe, K. S.; Wei, P. F.; Kelton, K. F.; Ray, C. S.; Day, D. E.

    2004-01-01

    A recently proposed analytical differential thermal analysis (DTA) method for estimating nucleation rates in glasses has been evaluated by comparing experimental data with numerically computed nucleation rates for a model lithium disilicate glass. The time- and temperature-dependent nucleation rates were predicted using the model and compared with values obtained from an analysis of numerically calculated DTA curves. The validity of the numerical approach was demonstrated earlier by comparison with experimental data. The excellent agreement between the nucleation rates from the model calculations and from the computer-generated DTA data demonstrates the validity of the proposed analytical DTA method.

  2. Trends & Controversies: Sociocultural Predictive Analytics and Terrorism Deterrence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanfilippo, Antonio P.; McGrath, Liam R.

    2011-08-12

    The use of predictive analytics to model terrorist rhetoric is highly instrumental in developing a strategy to deter terrorism. Traditional (e.g., Cold War) deterrence methods are ineffective against terrorist groups such as al Qaida. Terrorists typically regard the prospect of death or loss of property as an acceptable consequence of their struggle, so deterrence by threat of punishment is fruitless. On the other hand, isolating terrorists from the community that may sympathize with their cause can have a decisive deterring outcome. Without the moral backing of a supportive audience, terrorism cannot be successfully framed as a justifiable political strategy and recruiting is curtailed. Ultimately, terrorism deterrence is more effectively enforced by exerting influence to neutralize the communicative reach of terrorists.

  3. Validation and Inter-comparison Against Observations of GODAE Ocean View Ocean Prediction Systems

    NASA Astrophysics Data System (ADS)

    Xu, J.; Davidson, F. J. M.; Smith, G. C.; Lu, Y.; Hernandez, F.; Regnier, C.; Drevillon, M.; Ryan, A.; Martin, M.; Spindler, T. D.; Brassington, G. B.; Oke, P. R.

    2016-02-01

    For weather forecasts, validation of forecast performance is done at the end-user level as well as by the meteorological forecast centers. In developing ocean prediction capacity, the same level of care for ocean forecast performance and validation is needed. Herein we present results from a validation against observations of six global ocean forecast systems under the GODAE OceanView International Collaboration Network. These systems include the Global Ocean Ice Forecast System (GIOPS) developed by the Government of Canada; the PSY3 and PSY4 systems from the French Mercator-Ocean forecasting group; the FOAM system from the UK Met Office; HYCOM-RTOFS from NOAA/NCEP/NWA of the USA; and the Australian Bluelink-OceanMAPS system from the CSIRO, the Australian Meteorological Bureau, and the Australian Navy. The observation data used in the comparison are sea surface temperature, sub-surface temperature, sub-surface salinity, sea level anomaly, and sea ice total concentration. Results of the inter-comparison demonstrate forecast performance limits, strengths, and weaknesses of each of the six systems. This work establishes validation protocols and routines by which all new prediction systems developed under the CONCEPTS Collaborative Network will be benchmarked prior to approval for operations, including the anticipated delivery of CONCEPTS regional prediction systems over the next two years: a pan-Canadian 1/12th-degree-resolution ice-ocean prediction system and limited-area 1/36th-degree-resolution prediction systems. The validation approach of comparing forecasts to observations at the time and location of each observation is called Class 4 metrics. It has been adopted by major international ocean prediction centers and will be recommended to JCOMM-WMO as a routine validation approach for operational oceanography worldwide.
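A minimal sketch of the Class 4 idea, matching each observation with the forecast value at its time and location and scoring the differences, might look as follows; the nearest-neighbour matching, grid, and observation values are invented stand-ins for the centres' actual interpolation and data.

```python
# Hedged sketch of Class 4 verification: match each observation with the
# forecast value at its location, then score the forecast-minus-observation
# differences. Nearest-neighbour matching and all values are invented.
import numpy as np

obs = [  # (lat, lon, observed SST in degC)
    (47.0, -52.0, 8.1),
    (44.5, -63.5, 10.3),
    (49.2, -58.0, 7.6),
]

grid_lat = np.linspace(40.0, 52.0, 25)            # forecast grid (0.5 deg)
grid_lon = np.linspace(-70.0, -45.0, 51)
forecast = 8.0 + 0.1 * (grid_lat[:, None] - 45.0) + 0.0 * grid_lon[None, :]

errors = []
for lat, lon, sst in obs:
    i = int(np.abs(grid_lat - lat).argmin())      # nearest grid point
    j = int(np.abs(grid_lon - lon).argmin())
    errors.append(forecast[i, j] - sst)           # innovation

rmse = float(np.sqrt(np.mean(np.square(errors))))
bias = float(np.mean(errors))
```

Real Class 4 processing also matches forecasts in time (analysis, persistence, and multi-day lead times) and stratifies the statistics by region and depth.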

  4. Validity of the MCAT in Predicting Performance in the First Two Years of Medical School.

    ERIC Educational Resources Information Center

    Jones, Robert F.; Thomae-Forgues, Maria

    1984-01-01

    The first systematic summary of predictive validity research on the new Medical College Admission Test (MCAT) is presented. The results show that MCAT scores have significant predictive validity with respect to first- and second-year medical school course grades. Further directions for MCAT validity research are described. (Author/MLW)

  5. Semi-physiologic model validation and bioequivalence trials simulation to select the best analyte for acetylsalicylic acid.

    PubMed

    Cuesta-Gragera, Ana; Navarro-Fontestad, Carmen; Mangas-Sanjuan, Victor; González-Álvarez, Isabel; García-Arieta, Alfredo; Trocóniz, Iñaki F; Casabó, Vicente G; Bermejo, Marival

    2015-07-10

    The objective of this paper is to apply a previously developed semi-physiologic pharmacokinetic model, implemented in NONMEM, to simulate bioequivalence (BE) trials of acetylsalicylic acid (ASA) in order to validate the model's performance against ASA human experimental data. ASA is a drug with first-pass hepatic and intestinal metabolism following Michaelis-Menten kinetics that leads to the formation of two main metabolites in two generations (first- and second-generation metabolites). The first aim was to adapt the semi-physiologic model for ASA in NONMEM using ASA pharmacokinetic parameters from the literature, reflecting its sequential metabolism. The second aim was to validate this model by comparing the results of NONMEM simulations with published experimental data at a dose of 1000 mg. The validated model was then used to simulate bioequivalence trials at three dose levels (100, 1000, and 3000 mg) and with six test formulations with decreasing in vivo dissolution rate constants versus the reference formulation (kD 8-0.25 h⁻¹). Finally, the third aim was to determine which analyte (parent drug, first-generation metabolite, or second-generation metabolite) was most sensitive to changes in formulation performance. The validation results showed that the concentration-time curves obtained in the simulations closely reproduced the published experimental data, confirming model performance. The parent drug (ASA) was the analyte most sensitive to the decrease in pharmaceutical quality, showing the largest decrease in the Cmax and AUC ratios between test and reference formulations. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. The Predictive Validity of Dynamic Assessment: A Review

    ERIC Educational Resources Information Center

    Caffrey, Erin; Fuchs, Douglas; Fuchs, Lynn S.

    2008-01-01

    The authors report on a mixed-methods review of 24 studies that explores the predictive validity of dynamic assessment (DA). For 15 of the studies, they conducted quantitative analyses using Pearson's correlation coefficients. They descriptively examined the remaining studies to determine if their results were consistent with findings from the…

  7. Analytical Methodology for Predicting the Onset of Widespread Fatigue Damage in Fuselage Structure

    NASA Technical Reports Server (NTRS)

    Harris, Charles E.; Newman, James C., Jr.; Piascik, Robert S.; Starnes, James H., Jr.

    1996-01-01

    NASA has developed a comprehensive analytical methodology for predicting the onset of widespread fatigue damage in fuselage structure. The determination of the number of flights and operational hours of aircraft service life that are related to the onset of widespread fatigue damage includes analyses for crack initiation, fatigue crack growth, and residual strength. Therefore, the computational capability required to predict analytically the onset of widespread fatigue damage must be able to represent a wide range of crack sizes from the material (microscale) level to the global structural-scale level. NASA studies indicate that the fatigue crack behavior in aircraft structure can be represented conveniently by the following three analysis scales: small three-dimensional cracks at the microscale level, through-the-thickness two-dimensional cracks at the local structural level, and long cracks at the global structural level. The computational requirements for each of these three analysis scales are described in this paper.

  8. Exact Analysis of Squared Cross-Validity Coefficient in Predictive Regression Models

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2009-01-01

    In regression analysis, the notion of population validity is of theoretical interest for describing the usefulness of the underlying regression model, whereas the presumably more important concept of population cross-validity represents the predictive effectiveness for the regression equation in future research. It appears that the inference…

  9. Empirical and semi-analytical models for predicting peak outflows caused by embankment dam failures

    NASA Astrophysics Data System (ADS)

    Wang, Bo; Chen, Yunliang; Wu, Chao; Peng, Yong; Song, Jiajun; Liu, Wenjun; Liu, Xin

    2018-07-01

    Prediction of the peak discharge of floods has attracted great attention from researchers and engineers. In the present study, nine typical nonlinear mathematical models are established based on a database of 40 historical dam failures. The first eight models, developed through a series of regression analyses, are purely empirical, while the last one is a semi-analytical approach derived from an analytical solution for dam-break floods in a trapezoidal channel. Water depth above breach invert (Hw), volume of water stored above breach invert (Vw), embankment length (El), and average embankment width (Ew) are used as independent variables to develop empirical formulas for estimating the peak outflow from breached embankment dams. The multiple regression analysis indicates that a function using the former two variables (i.e., Hw and Vw) produces considerably more accurate results than one using the latter two (i.e., El and Ew). The semi-analytical approach works best in terms of both prediction accuracy and uncertainty, and the established empirical models produce reasonable results except for the model using only El. Moreover, the present models have been compared with other models available in the literature for estimating peak discharge.
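Empirical peak-outflow formulas of this kind are typically power laws fitted in log space. The sketch below shows the general technique on invented data; the functional form Qp = a·Hw^b·Vw^c and the synthetic values are illustrative assumptions, not the paper's fitted coefficients.

```python
# Hedged sketch: fit a power-law peak-outflow formula Qp = a * Hw**b * Vw**c
# by log-linear least squares. The form and the data are illustrative only.
import numpy as np

Hw = np.array([10.0, 20.0, 30.0, 15.0, 25.0])   # water depth above breach (m)
Vw = np.array([1e6, 5e6, 2e7, 3e6, 8e6])        # stored volume (m^3)
Qp = 0.5 * Hw ** 1.2 * Vw ** 0.4                # synthetic "observed" peaks

# log Qp = log a + b*log Hw + c*log Vw  ->  ordinary least squares
X = np.column_stack([np.ones_like(Hw), np.log(Hw), np.log(Vw)])
coef, *_ = np.linalg.lstsq(X, np.log(Qp), rcond=None)
a, b, c = float(np.exp(coef[0])), float(coef[1]), float(coef[2])
# On this noise-free example the fit recovers a = 0.5, b = 1.2, c = 0.4.
```

With real failure data the residual scatter, not shown here, is what drives the large prediction uncertainty the paper compares across models.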

  10. The Validity of Selection and Classification Procedures for Predicting Job Performance.

    DTIC Science & Technology

    1987-04-01

    [Abstract garbled by OCR. Recoverable contents listing: Alternative Selection Procedures; Meta-Analyses of Validities; Meta-Analytic Comparisons of Aptitude Test Battery composites (GM, General Maintenance; GS, General Science; GVN, Cognitive Ability; HS&T, Health, Social and Technology; K, Motor Coordination; KFM).]

  11. Advancing Continuous Predictive Analytics Monitoring: Moving from Implementation to Clinical Action in a Learning Health System.

    PubMed

    Keim-Malpass, Jessica; Kitzmiller, Rebecca R; Skeeles-Worley, Angela; Lindberg, Curt; Clark, Matthew T; Tai, Robert; Calland, James Forrest; Sullivan, Kevin; Randall Moorman, J; Anderson, Ruth A

    2018-06-01

    In the intensive care unit, clinicians monitor a diverse array of data inputs to detect early signs of impending clinical demise or improvement. Continuous predictive analytics monitoring synthesizes data from a variety of inputs into a risk estimate that clinicians can observe in a streaming environment. For this to be useful, clinicians must engage with the data in a way that makes sense for their clinical workflow in the context of a learning health system (LHS). This article describes the processes needed to evoke clinical action after initiation of continuous predictive analytics monitoring in an LHS. Copyright © 2018 Elsevier Inc. All rights reserved.

  12. Acute Brain Dysfunction: Development and Validation of a Daily Prediction Model.

    PubMed

    Marra, Annachiara; Pandharipande, Pratik P; Shotwell, Matthew S; Chandrasekhar, Rameela; Girard, Timothy D; Shintani, Ayumi K; Peelen, Linda M; Moons, Karl G M; Dittus, Robert S; Ely, E Wesley; Vasilevskis, Eduard E

    2018-03-24

    The goal of this study was to develop and validate a dynamic risk model to predict daily changes in acute brain dysfunction (ie, delirium and coma), discharge, and mortality in ICU patients. Using data from a multicenter prospective ICU cohort, a daily acute brain dysfunction-prediction model (ABD-pm) was developed by using multinomial logistic regression that estimated 15 transition probabilities (from one of three brain function states [normal, delirious, or comatose] to one of five possible outcomes [normal, delirious, comatose, ICU discharge, or died]) using baseline and daily risk factors. Model discrimination was assessed by using predictive characteristics such as negative predictive value (NPV). Calibration was assessed by plotting empirical vs model-estimated probabilities. Internal validation was performed by using a bootstrap procedure. Data were analyzed from 810 patients (6,711 daily transitions). The ABD-pm included individual risk factors: mental status, age, preexisting cognitive impairment, baseline and daily severity of illness, and daily administration of sedatives. The model yielded very high NPVs for "next day" delirium (NPV: 0.823), coma (NPV: 0.892), normal cognitive state (NPV: 0.875), ICU discharge (NPV: 0.905), and mortality (NPV: 0.981). The model demonstrated outstanding calibration when predicting the total number of patients expected to be in any given state across predicted risk. We developed and internally validated a dynamic risk model that predicts the daily risk for one of three cognitive states, ICU discharge, or mortality. The ABD-pm may be useful for predicting the proportion of patients for each outcome state across entire ICU populations to guide quality, safety, and care delivery activities. Copyright © 2018 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.
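A hedged sketch of the model's two building blocks, multinomial-logistic transition probabilities over the five outcome states and the NPV metric, is given below; the covariates, coefficient values, and counts are invented for illustration.

```python
# Hedged sketch of the ABD-pm's building blocks: multinomial-logistic
# transition probabilities over five next-day outcomes, plus the NPV
# metric. Coefficients and counts are invented for illustration.
import numpy as np

OUTCOMES = ["normal", "delirious", "comatose", "discharged", "died"]

def transition_probs(x, W, b):
    """Softmax over W @ x + b: next-day outcome probabilities for one
    patient-day, given coefficients for the current brain state."""
    z = W @ x + b
    e = np.exp(z - z.max())          # numerically stable softmax
    return e / e.sum()

def npv(tn, fn):
    """Negative predictive value: TN / (TN + FN)."""
    return tn / (tn + fn)

# Toy coefficients: 3 daily covariates, 5 outcomes.
W = np.array([[ 0.2, 0.0, -0.1],
              [ 0.5, 0.3,  0.0],
              [ 0.1, 0.6,  0.2],
              [-0.3, 0.0,  0.1],
              [ 0.0, 0.4,  0.5]])
b = np.zeros(5)
p = transition_probs(np.array([1.0, 0.5, 2.0]), W, b)   # sums to 1.0
```

In the full model a separate coefficient set is fitted for each of the three current brain states, giving the 15 transition probabilities described in the abstract.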

  13. Testing the Predictive Validity and Construct of Pathological Video Game Use

    PubMed Central

    Groves, Christopher L.; Gentile, Douglas; Tapscott, Ryan L.; Lynch, Paul J.

    2015-01-01

    Three studies assessed the construct of pathological video game use and tested its predictive validity. Replicating previous research, Study 1 produced evidence of convergent validity in 8th and 9th graders (N = 607) classified as pathological gamers. Study 2 replicated and extended the findings of Study 1 with college undergraduates (N = 504). Predictive validity was established in Study 3 by measuring cue reactivity to video games in college undergraduates (N = 254), such that pathological gamers were more emotionally reactive to and provided higher subjective appraisals of video games than non-pathological gamers and non-gamers. The three studies converged to show that pathological video game use seems similar to other addictions in its patterns of correlations with other constructs. Conceptual and definitional aspects of Internet Gaming Disorder are discussed. PMID:26694472

  14. Prediction and validation of blowout limits of co-flowing jet diffusion flames -- effect of dilution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karbasi, M.; Wierzba, I.

    1996-10-01

    The blowout limits of a co-flowing turbulent methane jet diffusion flame with diluent added to either the jet fuel or the surrounding air stream were studied both analytically and experimentally. Helium, nitrogen, and carbon dioxide were employed as the diluents. Experiments indicated that adding diluent to the jet fuel or the surrounding air stream decreased the stability limit of the jet diffusion flames. The strongest effect was observed with carbon dioxide as the diluent, followed by nitrogen and then helium. A model of extinction based on the recognized criterion of the ratio of mixing time scale to characteristic combustion time scale, using experimentally derived correlations, is proposed. It is capable of predicting the large reduction in jet blowout velocity caused by a relatively small increase in the co-flow stream velocity along with an increase in the concentration of diluent in either the jet fuel or the surrounding air stream. Experiments were carried out to validate the model. The predicted blowout velocities of turbulent jet diffusion flames obtained using this model are in good agreement with the corresponding experimental data.

  15. Predictive validity and reliability of the Braden scale for risk assessment of pressure ulcers in an intensive care unit.

    PubMed

    Lima-Serrano, M; González-Méndez, M I; Martín-Castaño, C; Alonso-Araujo, I; Lima-Rodríguez, J S

    2018-03-01

    Contribution to the validation of the Braden scale in patients admitted to the ICU, based on an analysis of its reliability and predictive validity. An analytical, observational, longitudinal prospective study was carried out. Intensive Care Unit, Hospital Virgen del Rocío, Seville (Spain). Patients aged 18 years or older and admitted for over 24 hours to the ICU were included. Patients with pressure ulcers upon admission were excluded. A total of 335 patients were enrolled in two study periods of one month each. None. The presence of grade I-IV pressure ulcers was regarded as the main or dependent variable. Three categories were considered (demographic, clinical, and prognostic) for the remaining variables. The incidence of patients who developed pressure ulcers was 8.1%. The proportions of grade I and grade II pressure ulcers were 40.6% and 59.4%, respectively, with the sacrum as the most frequently affected location. Cronbach's alpha coefficient in the assessments considered indicated good to moderate reliability. In the three evaluations made, a cutoff point of 12 was optimal for the assessments of the first and second days of admission; for the assessment on the day with the minimum score, the optimal cutoff point was 10. The Braden scale shows insufficient predictive validity and poor precision for cutoff points of both 18 and 16, which are those accepted in the different clinical scenarios. Copyright © 2017 Elsevier España, S.L.U. y SEMNIM. All rights reserved.
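The cutoff analysis can be illustrated with a short sketch that computes sensitivity and specificity for a chosen Braden cutoff, where a score at or below the cutoff flags the patient as "at risk"; the toy scores and outcomes below are invented, not the study's data.

```python
# Hedged sketch: sensitivity/specificity of a Braden-scale cutoff, where
# a score at or below the cutoff flags the patient as "at risk".
# Toy data; the study's cohort is not reproduced here.

def cutoff_performance(scores, ulcers, cutoff):
    tp = sum(1 for s, u in zip(scores, ulcers) if s <= cutoff and u)
    fn = sum(1 for s, u in zip(scores, ulcers) if s > cutoff and u)
    fp = sum(1 for s, u in zip(scores, ulcers) if s <= cutoff and not u)
    tn = sum(1 for s, u in zip(scores, ulcers) if s > cutoff and not u)
    return tp / (tp + fn), tn / (tn + fp)   # sensitivity, specificity

scores = [9, 11, 14, 10, 13, 15, 17, 19, 16, 12]            # Braden scores
ulcers = [True, True, True, True, False, False, False, False, False, False]
sens, spec = cutoff_performance(scores, ulcers, cutoff=12)
```

Sweeping the cutoff over the score range and plotting sensitivity against 1-specificity yields the ROC curve from which an optimal cutoff such as 12 or 10 is chosen.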

  16. Analytical validation of a novel multiplex test for detection of advanced adenoma and colorectal cancer in symptomatic patients.

    PubMed

    Dillon, Roslyn; Croner, Lisa J; Bucci, John; Kairs, Stefanie N; You, Jia; Beasley, Sharon; Blimline, Mark; Carino, Rochele B; Chan, Vicky C; Cuevas, Danissa; Diggs, Jeff; Jennings, Megan; Levy, Jacob; Mina, Ginger; Yee, Alvin; Wilcox, Bruce

    2018-05-30

    Early detection of colorectal cancer (CRC) is key to reducing associated mortality. Despite the importance of early detection, approximately 40% of individuals in the United States between the ages of 50 and 75 have never been screened for CRC. The low compliance with colonoscopy and fecal-based screening may be addressed with a non-invasive alternative such as a blood-based test. We describe here the analytical validation of a multiplexed blood-based assay that measures the plasma concentrations of 15 proteins to assess advanced adenoma (AA) and CRC risk in symptomatic patients. The test was developed on an electrochemiluminescent immunoassay platform employing four multi-marker panels, to be implemented in the clinic as a laboratory developed test (LDT). Under the Clinical Laboratory Improvement Amendments (CLIA) and College of American Pathologists (CAP) regulations, a United States-based clinical laboratory utilizing an LDT must establish performance characteristics relating to analytical validity prior to releasing patient test results. This report describes a series of studies demonstrating the precision, accuracy, analytical sensitivity, and analytical specificity of each of the 15 assays, as required by CLIA/CAP. In addition, the report describes studies characterizing each assay's dynamic range, parallelism, tolerance to common interfering substances, spike recovery, and stability across sample freeze-thaw cycles. Upon completion of the analytical characterization, a clinical accuracy study was performed to evaluate the concordance of AA and CRC classifier model calls using the analytical method intended for use in the clinic. Of 434 symptomatic patient samples tested, the percent agreement with the original CRC and AA calls was 87% and 92%, respectively. All studies followed CLSI guidelines and met the regulatory requirements for implementation of a new LDT. The results provide the analytical evidence to support implementation of the novel multi-marker test as a laboratory developed test.

  17. Observations on CFD Verification and Validation from the AIAA Drag Prediction Workshops

    NASA Technical Reports Server (NTRS)

    Morrison, Joseph H.; Kleb, Bil; Vassberg, John C.

    2014-01-01

    The authors provide observations from the AIAA Drag Prediction Workshops that have spanned over a decade and from a recent validation experiment at NASA Langley. These workshops provide an assessment of the predictive capability of forces and moments, focused on drag, for transonic transports. It is very difficult to manage the consistency of results in a workshop setting to perform verification and validation at the scientific level, but it may be sufficient to assess it at the level of practice. Observations thus far: 1) due to simplifications in the workshop test cases, wind tunnel data are not necessarily the “correct” results that CFD should match, 2) an average of core CFD data are not necessarily a better estimate of the true solution as it is merely an average of other solutions and has many coupled sources of variation, 3) outlier solutions should be investigated and understood, and 4) the DPW series does not have the systematic build up and definition on both the computational and experimental side that is required for detailed verification and validation. Several observations regarding the importance of the grid, effects of physical modeling, benefits of open forums, and guidance for validation experiments are discussed. The increased variation in results when predicting regions of flow separation and increased variation due to interaction effects, e.g., fuselage and horizontal tail, point out the need for validation data sets for these important flow phenomena. Experiences with a recent validation experiment at NASA Langley are included to provide guidance on validation experiments.

  18. Predictive validity of pre-admission assessments on medical student performance.

    PubMed

    Dabaliz, Al-Awwab; Kaadan, Samy; Dabbagh, M Marwan; Barakat, Abdulaziz; Shareef, Mohammad Abrar; Al-Tannir, Mohamad; Obeidat, Akef; Mohamed, Ayman

    2017-11-24

    To examine the predictive validity of pre-admission variables on students' performance in a medical school in Saudi Arabia. In this retrospective study, we collected admission and college performance data for 737 students in preclinical and clinical years. Data included high school scores and other standardized test scores, such as those of the National Achievement Test and the General Aptitude Test. Additionally, we included scores on the Test of English as a Foreign Language (TOEFL) and the International English Language Testing System (IELTS) exams. These datasets were then compared with college performance indicators, namely the cumulative Grade Point Average (cGPA) and progress test, using multivariate linear regression analysis. In preclinical years, both the National Achievement Test (p=0.04, B=0.08) and TOEFL (p=0.017, B=0.01) scores were positive predictors of cGPA, whereas the General Aptitude Test (p=0.048, B=-0.05) negatively predicted cGPA. Moreover, none of the pre-admission variables predicted progress test performance in the same group, and none predicted cGPA in clinical years. Overall, cGPA strongly predicted students' progress test performance (p<0.001, B=19.02). Only the National Achievement Test and TOEFL significantly predicted performance in preclinical years. However, these variables do not predict progress test performance, meaning that they do not predict the functional knowledge reflected in the progress test. We report various strengths and deficiencies in the current medical college admission criteria, and call for employing more sensitive and valid criteria that predict student performance and functional knowledge, especially in the clinical years.

  19. NCI-FDA Interagency Oncology Task Force Workshop Provides Guidance for Analytical Validation of Protein-based Multiplex Assays | Office of Cancer Clinical Proteomics Research

    Cancer.gov

    An NCI-FDA Interagency Oncology Task Force (IOTF) Molecular Diagnostics Workshop was held on October 30, 2008 in Cambridge, MA, to discuss requirements for analytical validation of protein-based multiplex technologies in the context of their intended use. This workshop, developed through NCI's Clinical Proteomic Technologies for Cancer initiative and the FDA, focused on technology-specific analytical validation processes to be addressed prior to use in clinical settings. Uniquely, the workshop used a case study approach to discuss issues related to

  20. L-shaped piezoelectric motor--part II: analytical modeling.

    PubMed

    Avirovik, Dragan; Karami, M Amin; Inman, Daniel; Priya, Shashank

    2012-01-01

    This paper develops an analytical model for an L-shaped piezoelectric motor. The motor structure is described in detail in Part I of this study. The coupling of the bending vibration modes of the bimorphs results in an elliptical motion at the tip. The emphasis of this paper is on the development of a precise analytical model that can predict the dynamic behavior of the motor based on its geometry. The motor was first modeled mechanically to identify the natural frequencies and mode shapes of the structure. Next, an electromechanical model was developed to take the piezoelectric effect into account, and the dynamics of the L-shaped piezoelectric motor were obtained as a function of voltage and frequency. Finally, the analytical model was validated by comparison with experimental results and the finite element method (FEM). © 2012 IEEE.

  1. Validating spatiotemporal predictions of an important pest of small grains.

    PubMed

    Merrill, Scott C; Holtzer, Thomas O; Peairs, Frank B; Lester, Philip J

    2015-01-01

    Arthropod pests are typically managed using tactics applied uniformly to the whole field. Precision pest management applies tactics under the assumption that within-field pest pressure differences exist. This approach allows for more precise and judicious use of scouting resources and management tactics. For example, a portion of a field delineated as attractive to pests may be selected to receive extra monitoring attention. Likely because of the high variability in pest dynamics, little attention has been given to developing precision pest prediction models. Here, multimodel synthesis was used to develop a spatiotemporal model predicting the density of a key pest of wheat, the Russian wheat aphid, Diuraphis noxia (Kurdjumov). Spatially implicit and spatially explicit models were synthesized to generate spatiotemporal pest pressure predictions. Cross-validation and field validation were used to confirm model efficacy. A strong within-field signal depicting aphid density was confirmed with low prediction errors. Results show that the within-field model predictions will provide higher-quality information than would be provided by traditional field scouting. With improvements to the broad-scale model component, the model synthesis approach and resulting tool could improve pest management strategy and provide a template for the development of spatially explicit pest pressure models. © 2014 Society of Chemical Industry.

  2. The German cervical cancer screening model: development and validation of a decision-analytic model for cervical cancer screening in Germany.

    PubMed

    Siebert, Uwe; Sroczynski, Gaby; Hillemanns, Peter; Engel, Jutta; Stabenow, Roland; Stegmaier, Christa; Voigt, Kerstin; Gibis, Bernhard; Hölzel, Dieter; Goldie, Sue J

    2006-04-01

    We sought to develop and validate a decision-analytic model for the natural history of cervical cancer for the German health care context and to apply it to cervical cancer screening. We developed a Markov model for the natural history of cervical cancer and cervical cancer screening in the German health care context. The model reflects current German practice standards for screening, diagnostic follow-up and treatment regarding cervical cancer and its precursors. Data for disease progression and cervical cancer survival were obtained from the literature and German cancer registries. Accuracy of Papanicolaou (Pap) testing was based on meta-analyses. We performed internal and external model validation using observed epidemiological data for unscreened women from different German cancer registries. The model predicts life expectancy, incidence of detected cervical cancer cases, lifetime cervical cancer risks and mortality. The model predicted a lifetime cervical cancer risk of 3.0% and a lifetime cervical cancer mortality of 1.0%, with a peak cancer incidence of 84/100,000 at age 51 years. These results were similar to observed data from German cancer registries, German literature data and results from other international models. Based on our model, annual Pap screening could prevent 98.7% of diagnosed cancer cases and 99.6% of deaths due to cervical cancer in women completely adherent to screening and compliant to treatment. Extending the screening interval from 1 year to 2, 3 or 5 years resulted in reduced screening effectiveness. This model provides a tool for evaluating the long-term effectiveness of different cervical cancer screening tests and strategies.
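    A natural-history model of the kind described above can be sketched as a Markov cohort simulation: the cohort is distributed over health states, multiplied by a one-cycle transition matrix, and incident cancer cases are accumulated to estimate lifetime risk. The states and transition probabilities below are illustrative assumptions, not the calibrated German values from the paper.

    ```python
    import numpy as np

    # Hypothetical 4-state Markov cohort model: Well -> CIN -> Cancer -> Dead.
    # Each row gives one-year transition probabilities and sums to 1.
    P = np.array([
        [0.98, 0.015, 0.0,   0.005],  # Well
        [0.30, 0.66,  0.035, 0.005],  # CIN precursor (may regress to Well)
        [0.0,  0.0,   0.90,  0.10],   # Cancer
        [0.0,  0.0,   0.0,   1.0],    # Dead (absorbing)
    ])

    state = np.array([1.0, 0.0, 0.0, 0.0])  # cohort starts healthy
    cancer_ever = 0.0
    for year in range(70):                   # follow the cohort for 70 cycles
        cancer_ever += state[1] * P[1, 2]    # incident CIN -> Cancer transitions
        state = state @ P                    # advance the cohort one cycle

    print(f"lifetime cancer risk: {cancer_ever:.3f}")
    ```

    Internal validation in this framework amounts to checking model outputs such as `cancer_ever` and the age-specific incidence peak against registry data.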

  3. Physiologically-based, predictive analytics using the heart-rate-to-Systolic-Ratio significantly improves the timeliness and accuracy of sepsis prediction compared to SIRS.

    PubMed

    Danner, Omar K; Hendren, Sandra; Santiago, Ethel; Nye, Brittany; Abraham, Prasad

    2017-04-01

    Enhancing the efficiency of diagnosis and treatment of severe sepsis by using physiologically-based, predictive analytical strategies has not been fully explored. We hypothesized that assessment of the heart-rate-to-systolic ratio significantly increases the timeliness and accuracy of sepsis prediction after emergency department (ED) presentation. We evaluated the records of 53,313 ED patients from a large, urban teaching hospital between January and June 2015. The HR-to-systolic ratio was compared to SIRS criteria for sepsis prediction. There were 884 patients with discharge diagnoses of sepsis, severe sepsis, and/or septic shock. Variations in three presenting variables, heart rate, systolic BP and temperature, were determined to be primary early predictors of sepsis with a 74% (654/884) accuracy compared to 34% (304/884) using SIRS criteria (p < 0.0001) in confirmed septic patients. Physiologically-based predictive analytics improved the accuracy and expediency of sepsis identification via detection of variations in the HR-to-systolic ratio. This approach may lead to earlier sepsis workup and life-saving interventions. Copyright © 2017 Elsevier Inc. All rights reserved.
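    The two screening rules being compared can be made concrete with a short sketch. The ratio threshold (0.9) and the SIRS cutoffs below are standard textbook values used here as assumptions; the paper does not publish its exact decision rule.

    ```python
    # Flagging by heart-rate-to-systolic-BP ratio versus >=2 SIRS criteria.
    def ratio_flag(hr, sbp, threshold=0.9):
        """Flag possible sepsis when HR / systolic BP exceeds the threshold."""
        return hr / sbp > threshold

    def sirs_flag(hr, temp_c, rr, wbc):
        """Classic SIRS: flag when two or more criteria are met."""
        criteria = [hr > 90,
                    temp_c > 38 or temp_c < 36,
                    rr > 20,
                    wbc > 12_000 or wbc < 4_000]
        return sum(criteria) >= 2

    # A tachycardic, borderline-hypotensive patient trips the ratio rule
    # while meeting only one SIRS criterion (heart rate).
    print(ratio_flag(hr=118, sbp=105))                        # True
    print(sirs_flag(hr=118, temp_c=37.2, rr=18, wbc=9_000))   # False
    ```

    The example illustrates why a continuous ratio can fire earlier than a count of categorical criteria: it reacts to the joint drift of two vitals before either alone crosses a cutoff.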

  4. Genomic prediction in animals and plants: simulation of data, validation, reporting, and benchmarking.

    PubMed

    Daetwyler, Hans D; Calus, Mario P L; Pong-Wong, Ricardo; de Los Campos, Gustavo; Hickey, John M

    2013-02-01

    The genomic prediction of phenotypes and breeding values in animals and plants has developed rapidly into its own research field. Results of genomic prediction studies are often difficult to compare because data simulation varies, real or simulated data are not fully described, and not all relevant results are reported. In addition, some new methods have been compared only in limited genetic architectures, leading to potentially misleading conclusions. In this article we review simulation procedures, discuss validation and reporting of results, and apply benchmark procedures for a variety of genomic prediction methods in simulated and real example data. Plant and animal breeding programs are being transformed by the use of genomic data, which are becoming widely available and cost-effective to predict genetic merit. A large number of genomic prediction studies have been published using both simulated and real data. The relative novelty of this area of research has made the development of scientific conventions difficult with regard to description of the real data, simulation of genomes, validation and reporting of results, and forward in time methods. In this review article we discuss the generation of simulated genotype and phenotype data, using approaches such as the coalescent and forward in time simulation. We outline ways to validate simulated data and genomic prediction results, including cross-validation. The accuracy and bias of genomic prediction are highlighted as performance indicators that should be reported. We suggest that a measure of relatedness between the reference and validation individuals be reported, as its impact on the accuracy of genomic prediction is substantial. A large number of methods were compared in example simulated and real (pine and wheat) data sets, all of which are publicly available. In our limited simulations, most methods performed similarly in traits with a large number of quantitative trait loci (QTL), whereas in traits
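    The cross-validation workflow the review recommends can be sketched on simulated data: biallelic genotypes, a sparse set of QTL, ridge regression as a stand-in for GBLUP-style shrinkage, and accuracy reported as the correlation between predicted and observed phenotypes in the validation fold. All sizes, the heritability, and the ridge penalty below are assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulate 400 individuals x 1000 biallelic markers (0/1/2) with 50 QTL.
    n, m, n_qtl = 400, 1000, 50
    X = rng.binomial(2, 0.3, size=(n, m)).astype(float)
    beta = np.zeros(m)
    beta[rng.choice(m, n_qtl, replace=False)] = rng.normal(0, 1, n_qtl)
    g = X @ beta
    y = g + rng.normal(0, g.std(), n)   # heritability ~ 0.5

    # 5-fold cross-validation; accuracy = cor(y_hat, y) in the validation fold.
    lam = 100.0
    folds = np.array_split(rng.permutation(n), 5)
    accs = []
    for val in folds:
        tr = np.setdiff1d(np.arange(n), val)
        Xt, yt = X[tr], y[tr] - y[tr].mean()
        b = np.linalg.solve(Xt.T @ Xt + lam * np.eye(m), Xt.T @ yt)
        accs.append(np.corrcoef(X[val] @ b, y[val])[0, 1])

    print(f"mean CV accuracy: {np.mean(accs):.2f}")
    ```

    Reporting the fold-wise accuracies (and, as the authors suggest, the relatedness between reference and validation individuals) is what makes such studies comparable.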

  5. Genomic Prediction in Animals and Plants: Simulation of Data, Validation, Reporting, and Benchmarking

    PubMed Central

    Daetwyler, Hans D.; Calus, Mario P. L.; Pong-Wong, Ricardo; de los Campos, Gustavo; Hickey, John M.

    2013-01-01

    The genomic prediction of phenotypes and breeding values in animals and plants has developed rapidly into its own research field. Results of genomic prediction studies are often difficult to compare because data simulation varies, real or simulated data are not fully described, and not all relevant results are reported. In addition, some new methods have been compared only in limited genetic architectures, leading to potentially misleading conclusions. In this article we review simulation procedures, discuss validation and reporting of results, and apply benchmark procedures for a variety of genomic prediction methods in simulated and real example data. Plant and animal breeding programs are being transformed by the use of genomic data, which are becoming widely available and cost-effective to predict genetic merit. A large number of genomic prediction studies have been published using both simulated and real data. The relative novelty of this area of research has made the development of scientific conventions difficult with regard to description of the real data, simulation of genomes, validation and reporting of results, and forward in time methods. In this review article we discuss the generation of simulated genotype and phenotype data, using approaches such as the coalescent and forward in time simulation. We outline ways to validate simulated data and genomic prediction results, including cross-validation. The accuracy and bias of genomic prediction are highlighted as performance indicators that should be reported. We suggest that a measure of relatedness between the reference and validation individuals be reported, as its impact on the accuracy of genomic prediction is substantial. A large number of methods were compared in example simulated and real (pine and wheat) data sets, all of which are publicly available. In our limited simulations, most methods performed similarly in traits with a large number of quantitative trait loci (QTL), whereas in traits

  6. Fluid mechanics of dynamic stall. II - Prediction of full scale characteristics

    NASA Technical Reports Server (NTRS)

    Ericsson, L. E.; Reding, J. P.

    1988-01-01

    Analytical extrapolations are made from experimental subscale dynamics to predict full scale characteristics of dynamic stall. The method proceeds by establishing analytic relationships between dynamic and static aerodynamic characteristics induced by viscous flow effects. The method is then validated by predicting dynamic test results on the basis of corresponding static test data obtained at the same subscale flow conditions, and the effect of Reynolds number on the static aerodynamic characteristics is determined from subscale to full scale flow conditions.

  7. Validation of selected analytical methods using accuracy profiles to assess the impact of a Tobacco Heating System on indoor air quality.

    PubMed

    Mottier, Nicolas; Tharin, Manuel; Cluse, Camille; Crudo, Jean-René; Lueso, María Gómez; Goujon-Ginglinger, Catherine G; Jaquier, Anne; Mitova, Maya I; Rouget, Emmanuel G R; Schaller, Mathieu; Solioz, Jennifer

    2016-09-01

    Studies in environmentally controlled rooms have been used over the years to assess the impact of environmental tobacco smoke on indoor air quality. As new tobacco products are developed, it is important to determine their impact on air quality when used indoors. Before such an assessment can take place it is essential that the analytical methods used to assess indoor air quality are validated and shown to be fit for their intended purpose. Consequently, for this assessment, an environmentally controlled room was built and seven analytical methods, representing eighteen analytes, were validated. The validations were carried out with smoking machines using a matrix-based approach applying the accuracy profile procedure. The performances of the methods were compared for all three matrices under investigation: background air samples, the environmental aerosol of Tobacco Heating System THS 2.2, a heat-not-burn tobacco product developed by Philip Morris International, and the environmental tobacco smoke of a cigarette. The environmental aerosol generated by the THS 2.2 device did not have any appreciable impact on the performances of the methods. The comparison between the background and THS 2.2 environmental aerosol samples generated by smoking machines showed that only five compounds were higher when THS 2.2 was used in the environmentally controlled room. Regarding environmental tobacco smoke from cigarettes, the yields of all analytes were clearly above those obtained with the other two air sample types. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  8. Validating Inertial Confinement Fusion (ICF) predictive capability using perturbed capsules

    NASA Astrophysics Data System (ADS)

    Schmitt, Mark; Magelssen, Glenn; Tregillis, Ian; Hsu, Scott; Bradley, Paul; Dodd, Evan; Cobble, James; Flippo, Kirk; Offerman, Dustin; Obrey, Kimberly; Wang, Yi-Ming; Watt, Robert; Wilke, Mark; Wysocki, Frederick; Batha, Steven

    2009-11-01

    Achieving ignition on NIF is a monumental step on the path toward utilizing fusion as a controlled energy source. Obtaining robust ignition requires accurate ICF models to predict the degradation of ignition caused by heterogeneities in capsule construction and irradiation. LANL has embarked on a project to induce controlled defects in capsules to validate our ability to predict their effects on fusion burn. These efforts include the validation of feature-driven hydrodynamics and mix in a convergent geometry. This capability is needed to determine the performance of capsules imploded under less-than-optimum conditions on future IFE facilities. LANL's recently initiated Defect Implosion Experiments (DIME) conducted at Rochester's Omega facility are providing input for these efforts. Recent simulation and experimental results will be shown.

  9. Predictive Validity of Curriculum-Based Measures for English Learners at Varying English Proficiency Levels

    ERIC Educational Resources Information Center

    Kim, Jennifer Sun; Vanderwood, Michael L.; Lee, Catherine Y.

    2016-01-01

    This study examined the predictive validity of curriculum-based measures in reading for Spanish-speaking English learners (ELs) at various levels of English proficiency. Third-grade Spanish-speaking EL students were screened during the fall using DIBELS Oral Reading Fluency (DORF) and Daze. Predictive validity was examined in relation to spring…

  10. Biological and analytical stability of a peripheral blood gene expression score for obstructive coronary artery disease in the PREDICT and COMPASS studies.

    PubMed

    Daniels, Susan E; Beineke, Philip; Rhees, Brian; McPherson, John A; Kraus, William E; Thomas, Gregory S; Rosenberg, Steven

    2014-10-01

    A gene expression score (GES) for obstructive coronary artery disease (CAD) has been validated in two multicenter studies. Receiver-operating characteristics (ROC) analysis of the GES on an expanded Personalized Risk Evaluation and Diagnosis in the Coronary Tree (PREDICT) cohort (NCT no. 00500617) with CAD defined by quantitative coronary angiography (QCA) or clinical reads yielded similar performance (area under the curve (AUC)=0.70, N=1,502) to the original validation cohort (AUC=0.70, N=526). Analysis of 138 non-Caucasian and 1,364 Caucasian patients showed very similar performance (AUCs=0.72 vs. 0.70). To assess analytic stability, stored samples from the original validation cohort (N=526) were re-tested after 5 years; the mean score changed from 20.3 to 19.8 (N=501, 95%). To assess patient scores over time, GES was determined on samples from 173 Coronary Obstruction Detection by Molecular Personalized Gene Expression (COMPASS) study (NCT no. 01117506) patients at approximately 1 year post-enrollment. Mean scores increased slightly from 15.9 to 17.3, corresponding to a 2.5 % increase in obstructive CAD likelihood. Changes in cardiovascular medications did not show a significant change in GES.

  11. Modelling by partial least squares the relationship between the HPLC mobile phases and analytes on phenyl column.

    PubMed

    Markopoulou, Catherine K; Kouskoura, Maria G; Koundourellis, John E

    2011-06-01

    Twenty-five descriptors and 61 structurally different analytes have been used with a partial least squares (PLS) projection to latent structures technique in order to study chromatographically their interaction mechanism on a phenyl column. According to the model, 240 different retention times of the analytes, expressed as the Y variable (log k), at different % MeOH mobile-phase concentrations have been correlated with their theoretically most important structural or molecular descriptors. The goodness-of-fit was estimated by the coefficient of multiple determination r(2) (0.919) and the root mean square error of estimation (RMSEE=0.1283), with a predictive ability (Q(2)) of 0.901. The model was further validated using cross-validation (CV), 20 response permutations (r(2) intercepts of 0.0, 0.0146; Q(2) intercepts of 0.0, -0.136), and external prediction. The contribution of certain mechanism interactions between the analytes, the mobile phase and the column, proportional or counterbalancing, is also studied. In evaluating the influence on Y of every variable in a PLS model, the VIP (variable importance in the projection) plot provides evidence that lipophilicity (expressed as Log D, Log P), polarizability, refractivity and the eluting power of the mobile phase are dominant in the retention mechanism on a phenyl column. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
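    A toy version of this modelling step can be sketched with scikit-learn's PLS implementation: fit log k against descriptors plus % MeOH, report the goodness-of-fit r² on the training data, and estimate predictive ability Q² from cross-validated predictions. The five descriptors and the linear data-generating rule are assumptions; the paper used 25 descriptors over 61 analytes.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(1)

    # 240 synthetic observations: 5 generic descriptors plus % MeOH.
    n_obs = 240
    X = rng.normal(size=(n_obs, 5))          # descriptors (logP, polarizability, ...)
    meoh = rng.uniform(30, 80, n_obs)        # % MeOH in the mobile phase
    X = np.column_stack([X, meoh])
    log_k = 0.8 * X[:, 0] + 0.4 * X[:, 1] - 0.03 * meoh + rng.normal(0, 0.1, n_obs)

    pls = PLSRegression(n_components=3)
    pls.fit(X, log_k)
    r2 = pls.score(X, log_k)                 # goodness-of-fit (r^2)

    # Q^2: 1 - PRESS / total sum of squares, using cross-validated predictions.
    y_cv = cross_val_predict(pls, X, log_k).ravel()
    q2 = 1 - np.sum((log_k - y_cv) ** 2) / np.sum((log_k - log_k.mean()) ** 2)

    print(f"r2={r2:.3f}  Q2={q2:.3f}")
    ```

    A large gap between r² and Q² would signal overfitting; in the paper the two are close (0.919 vs. 0.901), which is the pattern a well-specified PLS model shows.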

  12. Predictive validity of the Biomedical Admissions Test: an evaluation and case study.

    PubMed

    McManus, I C; Ferguson, Eamonn; Wakeford, Richard; Powis, David; James, David

    2011-01-01

    There has been an increase in the use of pre-admission selection tests for medicine. Such tests need to show good psychometric properties. Here, we use a paper by Emery and Bell [2009. The predictive validity of the Biomedical Admissions Test for pre-clinical examination performance. Med Educ 43:557-564] as a case study to evaluate and comment on the reporting of psychometric data in the field of medical student selection (and the comments apply to many papers in the field). We highlight pitfalls when reliability data are not presented, how simple zero-order associations can lead to inaccurate conclusions about the predictive validity of a test, and how biases need to be explored and reported. We show with BMAT that it is the knowledge part of the test which does all the predictive work. We show that without evidence of incremental validity it is difficult to assess the value of any selection tests for medicine.

  13. Analytical method validation to evaluate dithiocarbamates degradation in biobeds in South of Brazil.

    PubMed

    Vareli, Catiucia S; Pizzutti, Ionara R; Gebler, Luciano; Cardoso, Carmem D; Gai, Daniela S H; Fontana, Marlos E Z

    2018-07-01

    In order to evaluate the efficiency of biobeds on DTC degradation, the aim of this study was to apply, optimize and validate a method to determine dithiocarbamate (mancozeb) in biobeds using gas chromatography-tandem mass spectrometry (GC-MS). The DTC pesticide mancozeb was hydrolysed in a tin(II) chloride solution at 1.5% in HCl (4 mol L⁻¹) during 1 h in a water bath at 80 °C, and the CS₂ formed was extracted in isooctane. After cooling, 1 mL of the organic layer was transferred to an auto sampler vial and analyzed by GC-MS. A complete validation study was performed and the following parameters were assessed: linearity of the analytical curve (r²), estimated method and instrument limits of detection and limits of quantification (LODm, LODi, LOQm and LOQi, respectively), accuracy (recovery%), precision (RSD%) and matrix effects. Recovery experiments were carried out with a standard spiking solution of the DTC pesticide thiram. Blank biobed (biomixture) samples were spiked at the three levels corresponding to CS₂ concentrations of 1, 3 and 5 mg kg⁻¹, with seven replicates each (n = 7). The method presented satisfactory accuracy, with recoveries within the range of 89-96% and RSD ≤ 11%. The analytical curves were linear in the concentration range of 0.05-10 µg CS₂ mL⁻¹ (r² > 0.9946). LODm and LOQm were 0.1 and 0.5 mg CS₂ kg⁻¹, respectively, and the calculated matrix effects were not significant (≤ 20%). The validated method was applied to 80 samples (biomixture), from sixteen different biobeds (collected at five sampling times) during fourteen months. Ten percent of samples presented CS₂ concentration below the LOD (0.1 mg CS₂ kg⁻¹) and 49% of them showed results below the LOQ (0.5 mg CS₂ kg⁻¹), which demonstrates the biobeds capability to degrade DTC. Copyright © 2018 Elsevier B.V. All rights reserved.
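    The headline validation parameters reported in this abstract (linearity r², recovery%, and RSD%) reduce to a few lines of arithmetic. The calibration responses and replicate measurements below are made-up illustrative numbers, not the study's data.

    ```python
    import numpy as np

    # Linearity: r^2 of the calibration curve over the working range.
    conc = np.array([0.05, 0.1, 0.5, 1.0, 5.0, 10.0])      # µg CS2/mL standards
    resp = np.array([0.9, 2.1, 10.4, 20.6, 102.0, 204.5])  # detector response
    r2 = np.corrcoef(conc, resp)[0, 1] ** 2

    # Accuracy and precision at one spike level, n = 7 replicates.
    spiked = 3.0                                            # mg CS2/kg spike level
    found = np.array([2.8, 2.9, 2.7, 3.0, 2.8, 2.9, 2.7])   # measured concentrations
    recovery = found.mean() / spiked * 100                  # accuracy, recovery %
    rsd = found.std(ddof=1) / found.mean() * 100            # precision, RSD %

    print(f"r2={r2:.4f}  recovery={recovery:.0f}%  RSD={rsd:.1f}%")
    ```

    Acceptance criteria like those in the paper (recovery 89-96%, RSD ≤ 11%, r² > 0.9946) are then simple threshold checks on these quantities.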

  14. Development, validation and evaluation of an analytical method for the determination of monomeric and oligomeric procyanidins in apple extracts.

    PubMed

    Hollands, Wendy J; Voorspoels, Stefan; Jacobs, Griet; Aaby, Kjersti; Meisland, Ane; Garcia-Villalba, Rocio; Tomas-Barberan, Francisco; Piskula, Mariusz K; Mawson, Deborah; Vovk, Irena; Needs, Paul W; Kroon, Paul A

    2017-04-28

    There is a lack of data for individual oligomeric procyanidins in apples and apple extracts. Our aim was to develop, validate and evaluate an analytical method for the separation, identification and quantification of monomeric and oligomeric flavanols in apple extracts. To achieve this, we prepared two types of flavanol extracts from freeze-dried apples; one was an epicatechin-rich extract containing ∼30% (w/w) monomeric (-)-epicatechin which also contained oligomeric procyanidins (Extract A), the second was an oligomeric procyanidin-rich extract depleted of epicatechin (Extract B). The parameters considered for method optimisation were HPLC columns and conditions, sample heating, mass of extract and dilution volumes. The performance characteristics considered for method validation included standard linearity, method sensitivity, precision and trueness. Eight laboratories participated in the method evaluation. Chromatographic separation of the analytes was best achieved utilizing a HILIC column with a binary mobile phase consisting of acidic acetonitrile and acidic aqueous methanol. The final method showed linearity for epicatechin in the range 5-100 μg/mL with a correlation coefficient >0.999. Intra-day and inter-day precision of the analytes ranged from 2 to 6% and 2 to 13%, respectively. Up to dp3, trueness of the method was >95% but decreased with increasing dp. Within-laboratory precision showed RSD values <5% and <10% for monomers and oligomers, respectively. Between-laboratory precision was 4% and 15% (Extract A) and 7% and 30% (Extract B) for monomers and oligomers, respectively. An analytical method for the separation, identification and quantification of procyanidins in an apple extract was developed, validated and assessed. The results of the inter-laboratory evaluation indicate that the method is reliable and reproducible. Copyright © 2017. Published by Elsevier B.V.

  15. Experimental, Numerical, and Analytical Slosh Dynamics of Water and Liquid Nitrogen in a Spherical Tank

    NASA Technical Reports Server (NTRS)

    Storey, Jedediah Morse

    2016-01-01

    Understanding, predicting, and controlling fluid slosh dynamics is critical to safety and improving performance of space missions when a significant percentage of the spacecraft's mass is a liquid. Computational fluid dynamics simulations can be used to predict the dynamics of slosh, but these programs require extensive validation. Many experimental and numerical studies of water slosh have been conducted. However, slosh data for cryogenic liquids is lacking. Water and cryogenic liquid nitrogen are used in various ground-based tests with a spherical tank to characterize damping, slosh mode frequencies, and slosh forces. A single ring baffle is installed in the tank for some of the tests. Analytical models for slosh modes, slosh forces, and baffle damping are constructed based on prior work. Select experiments are simulated using a commercial CFD software, and the numerical results are compared to the analytical and experimental results for the purposes of validation and methodology-improvement.

  16. Why does self-reported emotional intelligence predict job performance? A meta-analytic investigation of mixed EI.

    PubMed

    Joseph, Dana L; Jin, Jing; Newman, Daniel A; O'Boyle, Ernest H

    2015-03-01

    Recent empirical reviews have claimed a surprisingly strong relationship between job performance and self-reported emotional intelligence (also commonly called trait EI or mixed EI), suggesting self-reported/mixed EI is one of the best known predictors of job performance (e.g., ρ = .47; Joseph & Newman, 2010b). Results further suggest mixed EI can robustly predict job performance beyond cognitive ability and Big Five personality traits (Joseph & Newman, 2010b; O'Boyle, Humphrey, Pollack, Hawver, & Story, 2011). These criterion-related validity results are problematic, given the paucity of evidence and the questionable construct validity of mixed EI measures themselves. In the current research, we update and reevaluate existing evidence for mixed EI, in light of prior work regarding the content of mixed EI measures. Results of the current meta-analysis demonstrate that (a) the content of mixed EI measures strongly overlaps with a set of well-known psychological constructs (i.e., ability EI, self-efficacy, and self-rated performance, in addition to Conscientiousness, Emotional Stability, Extraversion, and general mental ability; multiple R = .79), (b) an updated estimate of the meta-analytic correlation between mixed EI and supervisor-rated job performance is ρ = .29, and (c) the mixed EI-job performance relationship becomes nil (β = -.02) after controlling for the set of covariates listed above. Findings help to establish the construct validity of mixed EI measures and further support an intuitive theoretical explanation for the uncommonly high association between mixed EI and job performance--mixed EI instruments assess a combination of ability EI and self-perceptions, in addition to personality and cognitive ability. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  17. Predictive validity of pre-admission assessments on medical student performance

    PubMed Central

    Dabaliz, Al-Awwab; Kaadan, Samy; Dabbagh, M. Marwan; Barakat, Abdulaziz; Shareef, Mohammad Abrar; Al-Tannir, Mohamad; Obeidat, Akef

    2017-01-01

    Objectives To examine the predictive validity of pre-admission variables on students’ performance in a medical school in Saudi Arabia.  Methods In this retrospective study, we collected admission and college performance data for 737 students in preclinical and clinical years. Data included high school scores and other standardized test scores, such as those of the National Achievement Test and the General Aptitude Test. Additionally, we included the scores of the Test of English as a Foreign Language (TOEFL) and the International English Language Testing System (IELTS) exams. Those datasets were then compared with college performance indicators, namely the cumulative Grade Point Average (cGPA) and progress test, using multivariate linear regression analysis. Results In preclinical years, both the National Achievement Test (p=0.04, B=0.08) and TOEFL (p=0.017, B=0.01) scores were positive predictors of cGPA, whereas the General Aptitude Test (p=0.048, B=-0.05) negatively predicted cGPA. Moreover, none of the pre-admission variables were predictive of progress test performance in the same group. On the other hand, none of the pre-admission variables were predictive of cGPA in clinical years. Overall, cGPA strongly predicted students’ progress test performance (p<0.001 and B=19.02). Conclusions Only the National Achievement Test and TOEFL significantly predicted performance in preclinical years. However, these variables do not predict progress test performance, meaning that they do not predict the functional knowledge reflected in the progress test. We report various strengths and deficiencies in the current medical college admission criteria, and call for employing more sensitive and valid ones that predict student performance and functional knowledge, especially in the clinical years. PMID:29176032
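    The multivariate linear regression behind results like these can be sketched with plain least squares. The simulated effect sizes loosely mirror the reported signs (NAT positive, GAT negative, TOEFL positive); the data, the noise level, and the normal-approximation p-values are all illustrative assumptions.

    ```python
    import numpy as np
    from math import erfc, sqrt

    rng = np.random.default_rng(3)

    # Simulate 737 students' admission scores and a cGPA that depends on them.
    n = 737
    nat = rng.normal(80, 5, n)     # National Achievement Test
    gat = rng.normal(75, 5, n)     # General Aptitude Test
    toefl = rng.normal(90, 10, n)  # TOEFL
    cgpa = 2.0 + 0.08 * nat - 0.05 * gat + 0.01 * toefl + rng.normal(0, 0.4, n)

    # Ordinary least squares with an intercept column.
    X = np.column_stack([np.ones(n), nat, gat, toefl])
    beta, *_ = np.linalg.lstsq(X, cgpa, rcond=None)

    # Standard errors and two-sided p-values (normal approximation; fine for n=737).
    resid = cgpa - X @ beta
    sigma2 = resid @ resid / (n - X.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    p = [erfc(abs(b / s) / sqrt(2)) for b, s in zip(beta, se)]

    for name, b, pv in zip(["const", "NAT", "GAT", "TOEFL"], beta, p):
        print(f"{name:6s} B={b:+.3f}  p={pv:.3g}")
    ```

    The B values reported in the abstract are unstandardized coefficients of exactly this kind, which is why a small B (0.01 for TOEFL) can still be statistically significant when the predictor's scale is large.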

  18. Acoustic Predictions of Manned and Unmanned Rotorcraft Using the Comprehensive Analytical Rotorcraft Model for Acoustics (CARMA) Code System

    NASA Technical Reports Server (NTRS)

    Boyd, D. Douglas, Jr.; Burley, Casey L.; Conner, David A.

    2005-01-01

    The Comprehensive Analytical Rotorcraft Model for Acoustics (CARMA) is being developed under the Quiet Aircraft Technology Project within the NASA Vehicle Systems Program. The purpose of CARMA is to provide analysis tools for the design and evaluation of efficient low-noise rotorcraft, as well as support the development of safe, low-noise flight operations. The baseline prediction system of CARMA is presented and current capabilities are illustrated for a model rotor in a wind tunnel, a rotorcraft in flight and for a notional coaxial rotor configuration; however, a complete validation of the CARMA system capabilities with respect to a variety of measured databases is beyond the scope of this work. For the model rotor illustration, predicted rotor airloads and acoustics for a BO-105 model rotor are compared to test data from HART-II. For the flight illustration, acoustic data from an MD-520N helicopter flight test, which was conducted at Eglin Air Force Base in September 2003, are compared with CARMA full vehicle flight predictions. Predicted acoustic metrics at three microphone locations are compared for limited level flight and descent conditions. Initial acoustic predictions using CARMA for a notional coaxial rotor system are made. The effect of increasing the vertical separation between the rotors on the predicted airloads and acoustic results is shown for both aerodynamically non-interacting and aerodynamically interacting rotors. The sensitivity of including the aerodynamic interaction effects of each rotor on the other, especially when the rotors are in close proximity to one another, is initially examined. The predicted coaxial rotor noise is compared to that of a conventional single rotor system of equal thrust, where both are of reasonable size for an unmanned aerial vehicle (UAV).

  19. Preventing patient absenteeism: validation of a predictive overbooking model.

    PubMed

    Reid, Mark W; Cohen, Samuel; Wang, Hank; Kaung, Aung; Patel, Anish; Tashjian, Vartan; Williams, Demetrius L; Martinez, Bibiana; Spiegel, Brennan M R

    2015-12-01

    To develop a model that identifies patients at high risk for missing scheduled appointments ("no-shows" and cancellations) and to project the impact of predictive overbooking in a gastrointestinal endoscopy clinic-an exemplar resource-intensive environment with a high no-show rate. We retrospectively developed an algorithm that uses electronic health record (EHR) data to identify patients who do not show up to their appointments. Next, we prospectively validated the algorithm at a Veterans Administration healthcare network clinic. We constructed a multivariable logistic regression model that assigned a no-show risk score optimized by receiver operating characteristic curve analysis. Based on these scores, we created a calendar of projected open slots to offer to patients and compared the daily performance of predictive overbooking with fixed overbooking and typical "1 patient, 1 slot" scheduling. Data from 1392 patients identified several predictors of no-show, including previous absenteeism, comorbid disease burden, and current diagnoses of mood and substance use disorders. The model correctly classified most patients during the development (area under the curve [AUC] = 0.80) and validation phases (AUC = 0.75). Prospective testing in 1197 patients found that predictive overbooking averaged 0.51 unused appointments per day versus 6.18 for typical booking (difference = -5.67; 95% CI, -6.48 to -4.87; P < .0001). Predictive overbooking could have increased service utilization from 62% to 97% of capacity, with only rare clinic overflows. Information from EHRs can accurately predict whether patients will no-show. This method can be used to overbook appointments, thereby maximizing service utilization while staying within clinic capacity.
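    The modelling step described here (a multivariable logistic no-show model scored by ROC AUC) can be sketched on synthetic data. The predictors (prior absenteeism, comorbidity burden, a mood/substance-use diagnosis flag) come from the abstract; the data-generating rule and the 0.5 overbooking threshold are assumptions.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(2)

    # Simulate 1392 patients (the development-cohort size in the abstract).
    n = 1392
    prior_noshow = rng.poisson(1.0, n)     # previous missed appointments
    comorbidity = rng.poisson(2.0, n)      # comorbid disease burden score
    mood_dx = rng.binomial(1, 0.3, n)      # mood / substance use diagnosis
    logit = -2.0 + 0.8 * prior_noshow + 0.2 * comorbidity + 0.7 * mood_dx
    noshow = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    X = np.column_stack([prior_noshow, comorbidity, mood_dx])
    model = LogisticRegression().fit(X, noshow)
    risk = model.predict_proba(X)[:, 1]

    auc = roc_auc_score(noshow, risk)      # discrimination, as in the paper's AUCs
    print(f"AUC = {auc:.2f}")

    # Predictive overbooking: offer an extra slot wherever risk is high.
    extra_slots = int((risk > 0.5).sum())
    ```

    In the study's workflow, these risk scores feed a calendar of projected open slots; the AUC (0.80 development, 0.75 validation) quantifies how well such scores separate attenders from no-shows.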

  20. March 2017 Grenada Manufacturing, LLC Data Validation Reports and Analytical Laboratory Reports for the Main Plant Building Vapor Intrusion Sampling

    EPA Pesticide Factsheets

    Data Validation Reports and Full Analytical Lab Reports for Indoor Air, Ambient Air and Sub-slab samples taken during the facility vapor intrusion investigation in March 2017 at the Grenada Manufacturing plant.

  1. A Response to "Measuring Students' Writing Ability on a Computer Analytic Developmental Scale: An Exploratory Validity Study"

    ERIC Educational Resources Information Center

    Reutzel, D. Ray; Mohr, Kathleen A. J.

    2014-01-01

    In this response to "Measuring Students' Writing Ability on a Computer Analytic Developmental Scale: An Exploratory Validity Study," the authors agree that assessments should seek parsimony in both theory and application wherever possible. Doing so allows maximal dissemination and implementation while minimizing costs. The Writing…

  2. Differential Predictive Validity of a Preschool Battery Across Race and Sex.

    ERIC Educational Resources Information Center

    Reynolds, Cecil R.

    Determination of the fairness of preschool tests for use with children of varying cultural backgrounds is the major objective of this study. The predictive validity of a battery of preschool tests, chosen to represent the core areas of preschool assessment, across race and sex, was evaluated. Validity of the battery was examined over a 12-month…

  3. The predictive validity of ideal partner preferences: a review and meta-analysis.

    PubMed

    Eastwick, Paul W; Luchies, Laura B; Finkel, Eli J; Hunt, Lucy L

    2014-05-01

    A central element of interdependence theory is that people have standards against which they compare their current outcomes, and one ubiquitous standard in the mating domain is the preference for particular attributes in a partner (ideal partner preferences). This article reviews research on the predictive validity of ideal partner preferences and presents a new integrative model that highlights when and why ideals succeed or fail to predict relational outcomes. Section 1 examines predictive validity by reviewing research on sex differences in the preference for physical attractiveness and earning prospects. Men and women reliably differ in the extent to which these qualities affect their romantic evaluations of hypothetical targets. Yet a new meta-analysis spanning the attraction and relationships literatures (k = 97) revealed that physical attractiveness predicted romantic evaluations with a moderate-to-strong effect size (r = ∼.40) for both sexes, and earning prospects predicted romantic evaluations with a small effect size (r = ∼.10) for both sexes. Sex differences in the correlations were small (r difference = .03) and uniformly nonsignificant. Section 2 reviews research on individual differences in ideal partner preferences, drawing from several theoretical traditions to explain why ideals predict relational evaluations at different relationship stages. Furthermore, this literature also identifies alternative measures of ideal partner preferences that have stronger predictive validity in certain theoretically sensible contexts. Finally, a discussion highlights a new framework for conceptualizing the appeal of traits, the difference between live and hypothetical interactions, and the productive interplay between mating research and broader psychological theories.

  4. Prediction of down-gradient impacts of DNAPL source depletion using tracer techniques: Laboratory and modeling validation

    NASA Astrophysics Data System (ADS)

    Jawitz, J. W.; Basu, N.; Chen, X.

    2007-05-01

    Interwell application of coupled nonreactive and reactive tracers through aquifer contaminant source zones enables quantitative characterization of aquifer heterogeneity and contaminant architecture. Parameters obtained from tracer tests are presented here in a Lagrangian framework that can be used to predict the dissolution of nonaqueous phase liquid (NAPL) contaminants. Nonreactive tracers are commonly used to provide information about travel time distributions in hydrologic systems. Reactive tracers have more recently been introduced as a tool to quantify the amount of NAPL contaminant present within the tracer swept volume. Our group has extended reactive tracer techniques to also characterize NAPL spatial distribution heterogeneity. By conceptualizing the flow field through an aquifer as a collection of streamtubes, the aquifer hydrodynamic heterogeneities may be characterized by a nonreactive tracer travel time distribution, and NAPL spatial distribution heterogeneity may be similarly described using reactive travel time distributions. The combined statistics of these distributions are used to derive a simple analytical solution for contaminant dissolution. This analytical solution, and the tracer techniques used for its parameterization, were validated both numerically and experimentally. Illustrative applications are presented from numerical simulations using the multiphase flow and transport simulator UTCHEM, and laboratory experiments of surfactant-enhanced NAPL remediation in two-dimensional flow chambers.

  5. Modern modeling techniques had limited external validity in predicting mortality from traumatic brain injury.

    PubMed

    van der Ploeg, Tjeerd; Nieboer, Daan; Steyerberg, Ewout W

    2016-10-01

    Prediction of medical outcomes may potentially benefit from using modern statistical modeling techniques. We aimed to externally validate modeling strategies for prediction of 6-month mortality of patients suffering from traumatic brain injury (TBI) with predictor sets of increasing complexity. We analyzed individual patient data from 15 different studies including 11,026 TBI patients. We consecutively considered a core set of predictors (age, motor score, and pupillary reactivity), an extended set with computed tomography scan characteristics, and a further extension with two laboratory measurements (glucose and hemoglobin). With each of these sets, we predicted 6-month mortality using default settings with five statistical modeling techniques: logistic regression (LR), classification and regression trees, random forests (RFs), support vector machines (SVM), and neural nets. For external validation, a model developed on one of the 15 data sets was applied to each of the 14 remaining sets. This process was repeated 15 times for a total of 630 validations. The area under the receiver operating characteristic curve (AUC) was used to assess the discriminative ability of the models. For the most complex predictor set, the LR models performed best (median validated AUC value, 0.757), followed by RF and support vector machine models (median validated AUC value, 0.735 and 0.732, respectively). With each predictor set, the classification and regression trees models showed poor performance (median validated AUC value, <0.7). The variability in performance across the studies was smallest for the RF- and LR-based models (interquartile range for validated AUC values from 0.07 to 0.10). In the area of predicting mortality from TBI, nonlinear and nonadditive effects are not pronounced enough to make modern prediction methods beneficial. Copyright © 2016 Elsevier Inc. All rights reserved.
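The study's design, develop on one data set and validate on every other, can be sketched generically. The AUC below is the rank-based (Mann-Whitney) estimate; the five modeling techniques compared in the paper are not reimplemented and are abstracted behind `fit`/`predict` callables.

```python
import numpy as np

def auc(y_true, scores):
    """Rank-based AUC: probability a random positive outscores a random negative."""
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def cross_study_aucs(studies, fit, predict):
    """Leave-one-study-out external validation: develop a model on each study
    and validate it on every other study, as in the paper's 15 x 14 design."""
    out = []
    for i, (X_dev, y_dev) in enumerate(studies):
        model = fit(X_dev, y_dev)
        for j, (X_val, y_val) in enumerate(studies):
            if j != i:
                out.append(auc(y_val, predict(model, X_val)))
    return out
```

Summarizing the resulting list with a median and interquartile range mirrors the performance statistics reported above.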

  6. Validation of Fatigue Modeling Predictions in Aviation Operations

    NASA Technical Reports Server (NTRS)

    Gregory, Kevin; Martinez, Siera; Flynn-Evans, Erin

    2017-01-01

    Bio-mathematical fatigue models that predict levels of alertness and performance are one potential tool for use within integrated fatigue risk management approaches. A number of models have been developed that provide predictions based on acute and chronic sleep loss, circadian desynchronization, and sleep inertia. Some are publicly available and gaining traction in settings such as commercial aviation as a means of evaluating flight crew schedules for potential fatigue-related risks. Yet most models have not been rigorously evaluated and independently validated for the operations to which they are being applied, and many users are not fully aware of the limitations within which model results should be interpreted and applied.

  7. The Impact of Proactive Student-Success Coaching Using Predictive Analytics on Community College Students

    ERIC Educational Resources Information Center

    Hall, Mark Monroe

    2017-01-01

    The purpose of this study was to examine the effects of proactive student-success coaching, informed by predictive analytics, on student academic performance and persistence. Specifically, semester GPA and semester-to-semester student persistence were the investigated outcomes. Uniquely, the community college focused the intervention on only…

  8. Predicting the ungauged basin: model validation and realism assessment

    NASA Astrophysics Data System (ADS)

    van Emmerik, Tim; Mulder, Gert; Eilander, Dirk; Piet, Marijn; Savenije, Hubert

    2016-04-01

    The hydrological decade on Predictions in Ungauged Basins (PUB) [1] led to many new insights in model development, calibration strategies, data acquisition and uncertainty analysis. Due to a limited amount of published studies on genuinely ungauged basins, model validation and realism assessment of model outcome has not been discussed to a great extent. With this study [2] we aim to contribute to the discussion on how one can determine the value and validity of a hydrological model developed for an ungauged basin. As in many cases no local, or even regional, data are available, alternative methods should be applied. Using a PUB case study in a genuinely ungauged basin in southern Cambodia, we give several examples of how one can use different types of soft data to improve model design, calibrate and validate the model, and assess the realism of the model output. A rainfall-runoff model was coupled to an irrigation reservoir, allowing the use of additional and unconventional data. The model was mainly forced with remote sensing data, and local knowledge was used to constrain the parameters. Model realism assessment was done using data from surveys. This resulted in a successful reconstruction of the reservoir dynamics, and revealed the different hydrological characteristics of the two topographical classes. We do not present a generic approach that can be transferred to other ungauged catchments, but we aim to show how clever model design and alternative data acquisition can result in a valuable hydrological model for ungauged catchments. [1] Sivapalan, M., Takeuchi, K., Franks, S., Gupta, V., Karambiri, H., Lakshmi, V., et al. (2003). IAHS decade on predictions in ungauged basins (PUB), 2003-2012: shaping an exciting future for the hydrological sciences. Hydrol. Sci. J. 48, 857-880. doi: 10.1623/hysj.48.6.857.51421 [2] van Emmerik, T., Mulder, G., Eilander, D., Piet, M. and Savenije, H. (2015). Predicting the ungauged basin: model validation and realism assessment.

  9. Extension and validation of an analytical model for in vivo PET verification of proton therapy—a phantom and clinical study

    NASA Astrophysics Data System (ADS)

    Attanasi, F.; Knopf, A.; Parodi, K.; Paganetti, H.; Bortfeld, T.; Rosso, V.; Del Guerra, A.

    2011-08-01

    The interest in positron emission tomography (PET) as a tool for treatment verification in proton therapy has become widespread in recent years, and several research groups worldwide are currently investigating the clinical implementation. After the first off-line investigation with a PET/CT scanner at MGH (Boston, USA), attention is now focused on an in-room PET application immediately after treatment in order to also detect shorter-lived isotopes, such as ¹⁵O and ¹³N, minimizing isotope washout and avoiding patient repositioning errors. Clinical trials are being conducted by means of commercially available PET systems, and other tests are planned using application-dedicated tomographs. Parallel to the experimental investigation and new hardware development, great interest has been shown in the development of fast procedures to provide feedback regarding the delivered dose from reconstructed PET images. Since the thresholds of inelastic nuclear reactions leading to tissue β+-activation fall within the energy range of 15-20 MeV, the distal activity fall-off is correlated, but not directly matched, to the distal fall-off of the dose distribution. Moreover, the physical interactions leading to β+-activation and energy deposition are of a different nature. All these facts make it essential to further develop accurate and fast methodologies capable of predicting, on the basis of the planned dose distribution, expected PET images to be compared with actual PET measurements, thus providing clinical feedback on the correctness of the dose delivery and of the irradiation field position. The aim of this study has been to validate an analytical model and to implement and evaluate it in a fast and flexible framework able to locally predict such activity distributions directly taking the reference planning CT and planned dose as inputs. The results achieved in this study for phantoms and clinical cases highlighted the potential of the implemented method to predict expected

  10. Consensus Statement on Electronic Health Predictive Analytics: A Guiding Framework to Address Challenges

    PubMed Central

    Amarasingham, Ruben; Audet, Anne-Marie J.; Bates, David W.; Glenn Cohen, I.; Entwistle, Martin; Escobar, G. J.; Liu, Vincent; Etheredge, Lynn; Lo, Bernard; Ohno-Machado, Lucila; Ram, Sudha; Saria, Suchi; Schilling, Lisa M.; Shahi, Anand; Stewart, Walter F.; Steyerberg, Ewout W.; Xie, Bin

    2016-01-01

    Context: The recent explosion in available electronic health record (EHR) data is motivating a rapid expansion of electronic health care predictive analytic (e-HPA) applications, defined as the use of electronic algorithms that forecast clinical events in real time with the intent to improve patient outcomes and reduce costs. There is an urgent need for a systematic framework to guide the development and application of e-HPA to ensure that the field develops in a scientifically sound, ethical, and efficient manner. Objectives: Building upon earlier frameworks of model development and utilization, we identify the emerging opportunities and challenges of e-HPA, propose a framework that enables us to realize these opportunities, address these challenges, and motivate e-HPA stakeholders to both adopt and continuously refine the framework as the applications of e-HPA emerge. Methods: To achieve these objectives, 17 experts with diverse expertise including methodology, ethics, legal, regulation, and health care delivery systems were assembled to identify emerging opportunities and challenges of e-HPA and to propose a framework to guide the development and application of e-HPA. Findings: The framework proposed by the panel includes three key domains where e-HPA differs qualitatively from earlier generations of models and algorithms (Data Barriers, Transparency, and Ethics) and areas where current frameworks are insufficient to address the emerging opportunities and challenges of e-HPA (Regulation and Certification; and Education and Training). 
The following list of recommendations summarizes the key points of the framework: Data Barriers: Establish mechanisms within the scientific community to support data sharing for predictive model development and testing. Transparency: Set standards around e-HPA validation based on principles of scientific transparency and reproducibility. Ethics: Develop both individual-centered and society-centered risk-benefit approaches to evaluate

  11. Consensus Statement on Electronic Health Predictive Analytics: A Guiding Framework to Address Challenges.

    PubMed

    Amarasingham, Ruben; Audet, Anne-Marie J; Bates, David W; Glenn Cohen, I; Entwistle, Martin; Escobar, G J; Liu, Vincent; Etheredge, Lynn; Lo, Bernard; Ohno-Machado, Lucila; Ram, Sudha; Saria, Suchi; Schilling, Lisa M; Shahi, Anand; Stewart, Walter F; Steyerberg, Ewout W; Xie, Bin

    2016-01-01

    The recent explosion in available electronic health record (EHR) data is motivating a rapid expansion of electronic health care predictive analytic (e-HPA) applications, defined as the use of electronic algorithms that forecast clinical events in real time with the intent to improve patient outcomes and reduce costs. There is an urgent need for a systematic framework to guide the development and application of e-HPA to ensure that the field develops in a scientifically sound, ethical, and efficient manner. Building upon earlier frameworks of model development and utilization, we identify the emerging opportunities and challenges of e-HPA, propose a framework that enables us to realize these opportunities, address these challenges, and motivate e-HPA stakeholders to both adopt and continuously refine the framework as the applications of e-HPA emerge. To achieve these objectives, 17 experts with diverse expertise including methodology, ethics, legal, regulation, and health care delivery systems were assembled to identify emerging opportunities and challenges of e-HPA and to propose a framework to guide the development and application of e-HPA. The framework proposed by the panel includes three key domains where e-HPA differs qualitatively from earlier generations of models and algorithms (Data Barriers, Transparency, and Ethics) and areas where current frameworks are insufficient to address the emerging opportunities and challenges of e-HPA (Regulation and Certification; and Education and Training). The following list of recommendations summarizes the key points of the framework: Data Barriers: Establish mechanisms within the scientific community to support data sharing for predictive model development and testing. Transparency: Set standards around e-HPA validation based on principles of scientific transparency and reproducibility.
Ethics: Develop both individual-centered and society-centered risk-benefit approaches to evaluate e-HPA. Regulation and Certification: Construct a

  12. A confirmatory factor analytic validation of the Tinnitus Handicap Inventory.

    PubMed

    Kleinstäuber, Maria; Frank, Ina; Weise, Cornelia

    2015-03-01

    Because the postulated three-factor structure of the internationally widely used Tinnitus Handicap Inventory (THI) has not yet been confirmed by a confirmatory factor analytic approach, this was the central aim of the current study. From a clinical setting, N=373 patients with chronic tinnitus completed the THI and further questionnaires assessing tinnitus-related and psychological variables. In order to analyze the psychometric properties of the THI, confirmatory factor analysis (CFA) and correlational analyses were conducted. CFA provided statistically significant support for a better fit of the data to the hypothesized three-factor structure (RMSEA=.049, WRMR=1.062, CFI=.965, TLI=.961) than to a general factor model (RMSEA=.062, WRMR=1.258, CFI=.942, TLI=.937). The calculation of Cronbach's alpha as indicator of internal consistency revealed satisfactory values (.80-.91) with the exception of the catastrophic subscale (.65). High positive correlations of the THI and its subscales with other measures of tinnitus distress, anxiety, and depression, high negative correlations with tinnitus acceptance, moderate positive correlations with anxiety sensitivity, sleeping difficulties, tinnitus loudness, and small correlations with the Big Five personality dimensions confirmed construct validity. Results show that the THI is a highly reliable and valid measure of tinnitus-related handicap. In contrast to results of previous exploratory analyses, the current findings support a three-factor rather than a unifactorial structure. Future research is needed to replicate this result in different tinnitus populations. Copyright © 2015 Elsevier Inc. All rights reserved.
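The internal-consistency statistic reported above, Cronbach's alpha, is computed directly from a respondents-by-items score matrix; a minimal version (illustrative data, not the study's):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

Perfectly correlated items yield alpha = 1; values in the .80-.91 range, as for most THI subscales here, are conventionally read as good internal consistency.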

  13. Does Psychopathy Predict Institutional Misconduct among Adults?: A Meta-Analytic Investigation

    ERIC Educational Resources Information Center

    Guy, Laura S.; Edens, John F.; Anthony, Christine; Douglas, Kevin S.

    2005-01-01

    Narrative reviews have raised several questions regarding the predictive validity of the Hare Psychopathy Checklist-Revised (PCL-R; R. D. Hare, 2003) and related scales in institutional settings. In this meta-analysis, the authors coded 273 effect sizes to investigate the association between the Hare scales and a hierarchy of increasingly specific…

  14. The Predictive Validity of the Minnesota Reading Assessment for Students in Postsecondary Vocational Education Programs.

    ERIC Educational Resources Information Center

    Brown, James M.; Chang, Gerald

    1982-01-01

    The predictive validity of the Minnesota Reading Assessment (MRA) when used to project potential performance of postsecondary vocational-technical education students was examined. Findings confirmed the MRA to be a valid predictor, although the error in prediction varied between the criterion variables. (Author/GK)

  15. Class-modelling in food analytical chemistry: Development, sampling, optimisation and validation issues - A tutorial.

    PubMed

    Oliveri, Paolo

    2017-08-22

    Qualitative data modelling is a fundamental branch of pattern recognition, with many applications in analytical chemistry, and embraces two main families: discriminant and class-modelling methods. The first strategy is appropriate when at least two classes are meaningfully defined in the problem under study, while the second strategy is the right choice when the focus is on a single class. For this reason, class-modelling methods are also referred to as one-class classifiers. Although, in the food analytical field, most of the issues would be properly addressed by class-modelling strategies, the use of such techniques is rather limited and, in many cases, discriminant methods are pressed into service for one-class problems, introducing a bias in the outcomes. Key aspects related to the development, optimisation and validation of suitable class models for the characterisation of food products are critically analysed and discussed. Copyright © 2017 Elsevier B.V. All rights reserved.
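A class-modelling (one-class) approach, as advocated above, fits an acceptance boundary around the single target class instead of separating two classes. The sketch below uses a Mahalanobis-distance acceptance region as a stand-in; SIMCA and the other chemometric class-modelling methods discussed in the tutorial are more elaborate, so this only illustrates the one-class idea.

```python
import numpy as np

class MahalanobisClassModel:
    """Minimal one-class model: accept a sample if its squared Mahalanobis
    distance to the target-class centroid is below a training-set quantile."""

    def fit(self, X, quantile=0.95):
        self.mean = X.mean(axis=0)
        self.inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
        self.threshold = np.quantile(self._dist2(X), quantile)
        return self

    def _dist2(self, X):
        diff = X - self.mean
        # quadratic form diff^T * inv_cov * diff for each row
        return np.einsum('ij,jk,ik->i', diff, self.inv_cov, diff)

    def predict(self, X):
        return self._dist2(X) <= self.threshold
```

Samples from other classes are simply rejected as "not the modelled class", which is exactly the behaviour a forced two-class discriminant model cannot provide.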

  16. Review and evaluation of performance measures for survival prediction models in external validation settings.

    PubMed

    Rahman, M Shafiqur; Ambler, Gareth; Choodari-Oskooei, Babak; Omar, Rumana Z

    2017-04-18

    When developing a prediction model for survival data it is essential to validate its performance in external validation settings using appropriate performance measures. Although a number of such measures have been proposed, there is only limited guidance regarding their use in the context of model validation. This paper reviewed and evaluated a wide range of performance measures to provide some guidelines for their use in practice. An extensive simulation study based on two clinical datasets was conducted to investigate the performance of the measures in external validation settings. Measures were selected from categories that assess the overall performance, discrimination and calibration of a survival prediction model. Some of these have been modified to allow their use with validation data, and a case study is provided to describe how these measures can be estimated in practice. The measures were evaluated with respect to their robustness to censoring and ease of interpretation. All measures are implemented, or are straightforward to implement, in statistical software. Most of the performance measures were reasonably robust to moderate levels of censoring. One exception was Harrell's concordance measure, which tended to increase as censoring increased. We recommend that Uno's concordance measure is used to quantify concordance when there are moderate levels of censoring. Alternatively, Gönen and Heller's measure could be considered, especially if censoring is very high, but we suggest that the prediction model is re-calibrated first. We also recommend that Royston's D is routinely reported to assess discrimination since it has an appealing interpretation. The calibration slope is useful in both internal and external validation settings and is recommended for routine reporting. Our recommendation would be to use any of the predictive accuracy measures and provide the corresponding predictive accuracy curves. In addition, we recommend investigating the characteristics
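Harrell's concordance, whose sensitivity to censoring the authors flag, follows directly from its definition: among usable pairs (one subject has an observed event strictly before the other's follow-up time), count how often the model assigns that subject the higher risk. A definitional sketch (Uno's and Gönen-Heller's estimators, recommended above under heavier censoring, weight or model pairs differently and are not shown):

```python
def harrell_c(time, event, risk):
    """Harrell's C for survival data. time: follow-up times; event: 1 if the
    event was observed, 0 if censored; risk: model-predicted risk scores.
    Ties in risk count one half."""
    concordant = ties = usable = 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            # pair usable only if subject i has an observed event before j's time
            if event[i] == 1 and time[i] < time[j]:
                usable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    ties += 1
    return (concordant + 0.5 * ties) / usable
```

Because censored subjects contribute only as the later member of a pair, heavier censoring shrinks and reshapes the usable-pair set, which is the mechanism behind the upward drift reported above.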

  17. Acquaintance ratings of the Big Five personality traits: incremental validity beyond and interactive effects with self-reports in the prediction of workplace deviance.

    PubMed

    Kluemper, Donald H; McLarty, Benjamin D; Bing, Mark N

    2015-01-01

    It is widely established that the Big Five personality traits of conscientiousness, agreeableness, and emotional stability are antecedents to workplace deviance (Berry, Ones, & Sackett, 2007). However, these meta-analytic findings are based on self-reported personality traits. A recent meta-analysis by Oh, Wang, and Mount (2011) identified the value of acquaintance-reported personality in the prediction of job performance. The current investigation extends prior work by comparing the validities of self- and acquaintance-reported personality in the prediction of workplace deviance across 2 studies. We also hypothesized and tested an interactive, value-added integration of self- with acquaintance-reported personality using socioanalytic personality theory (R. T. Hogan, 1991). Both studies assessed self- and acquaintance-rated Big Five traits, along with supervisor-rated workplace deviance. However, the studies varied the measures of workplace deviance, and the 2nd study also included a self-rated workplace deviance criterion for additional comparison. Across both studies, the traits of conscientiousness and agreeableness were strong predictors of workplace deviance, and acquaintance-reported personality provided incremental validity beyond self-reports. Additionally, acquaintance-reported conscientiousness and agreeableness moderated the prediction of workplace deviance by interacting with the corresponding self-reported traits. Implications for personality theory and measurement are discussed along with applications for practice. (c) 2015 APA, all rights reserved.

  18. The predictive validity of safety climate.

    PubMed

    Johnson, Stephen E

    2007-01-01

    Safety professionals have increasingly turned their attention to social science for insight into the causation of industrial accidents. One social construct, safety climate, has been examined by several researchers [Cooper, M. D., & Phillips, R. A. (2004). Exploratory analysis of the safety climate and safety behavior relationship. Journal of Safety Research, 35(5), 497-512; Gillen, M., Baltz, D., Gassel, M., Kirsch, L., & Vacarro, D. (2002). Perceived safety climate, job Demands, and coworker support among union and nonunion injured construction workers. Journal of Safety Research, 33(1), 33-51; Neal, A., & Griffin, M. A. (2002). Safety climate and safety behaviour. Australian Journal of Management, 27, 66-76; Zohar, D. (2000). A group-level model of safety climate: Testing the effect of group climate on microaccidents in manufacturing jobs. Journal of Applied Psychology, 85(4), 587-596; Zohar, D., & Luria, G. (2005). A multilevel model of safety climate: Cross-level relationships between organization and group-level climates. Journal of Applied Psychology, 90(4), 616-628] who have documented its importance as a factor explaining the variation of safety-related outcomes (e.g., behavior, accidents). Researchers have developed instruments for measuring safety climate and have established some degree of psychometric reliability and validity. The problem, however, is that predictive validity has not been firmly established, which reduces the credibility of safety climate as a meaningful social construct. The research described in this article addresses this problem and provides additional support for safety climate as a viable construct and as a predictive indicator of safety-related outcomes. This study used 292 employees at three locations of a heavy manufacturing organization to complete the 16 item Zohar Safety Climate Questionnaire (ZSCQ) [Zohar, D., & Luria, G. (2005). 
A multilevel model of safety climate: Cross-level relationships between organization and group

  19. Target analyte quantification by isotope dilution LC-MS/MS directly referring to internal standard concentrations--validation for serum cortisol measurement.

    PubMed

    Maier, Barbara; Vogeser, Michael

    2013-04-01

    Isotope dilution LC-MS/MS methods used in the clinical laboratory typically involve multi-point external calibration in each analytical series. Our aim was to test the hypothesis that determination of target analyte concentrations directly derived from the relation of the target analyte peak area to the peak area of a corresponding stable isotope labelled internal standard compound [direct isotope dilution analysis (DIDA)] may not be inferior to conventional external calibration with respect to accuracy and reproducibility. Quality control samples and human serum pools were analysed in a comparative validation protocol for cortisol as an exemplary analyte by LC-MS/MS. Accuracy and reproducibility were compared between quantification either involving a six-point external calibration function, or a result calculation merely based on peak area ratios of unlabelled and labelled analyte. Both quantification approaches resulted in similar accuracy and reproducibility. For specified analytes, reliable analyte quantification directly derived from the ratio of peak areas of labelled and unlabelled analyte without the need for a time-consuming multi-point calibration series is possible. This DIDA approach is of considerable practical importance for the application of LC-MS/MS in the clinical laboratory where short turnaround times often have high priority.
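The quantification step described above reduces to a single ratio: the target concentration is the analyte-to-internal-standard peak-area ratio multiplied by the known concentration of the spiked labelled standard. A sketch of that arithmetic; the optional response factor correcting for unequal MS response of labelled and unlabelled forms is an assumption here, defaulted to 1:

```python
def dida_concentration(area_analyte, area_istd, conc_istd, response_factor=1.0):
    """Direct isotope dilution analysis (DIDA): quantify from the peak-area
    ratio against the stable-isotope-labelled internal standard, with no
    external calibration curve."""
    return (area_analyte / area_istd) * conc_istd * response_factor

# e.g. serum cortisol: equal analyte and IS peak areas imply the analyte
# sits at the spiked internal-standard concentration
```

This is why the approach removes the six-point calibration series from each run: the labelled standard, co-eluting and co-ionising with the analyte, acts as a one-point internal calibrator in every sample.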

  20. Taking the Next Step: Combining Incrementally Valid Indicators to Improve Recidivism Prediction

    ERIC Educational Resources Information Center

    Walters, Glenn D.

    2011-01-01

    The possibility of combining indicators to improve recidivism prediction was evaluated in a sample of released federal prisoners randomly divided into a derivation subsample (n = 550) and a cross-validation subsample (n = 551). Five incrementally valid indicators were selected from five domains: demographic (age), historical (prior convictions),…

  1. An analytical method for predicting postwildfire peak discharges

    USGS Publications Warehouse

    Moody, John A.

    2012-01-01

    The analytical method presented here for predicting postwildfire peak discharge was developed from analysis of paired rainfall and runoff measurements collected from selected burned basins. Data were collected from 19 mountainous basins burned by eight wildfires in different hydroclimatic regimes in the western United States (California, Colorado, Nevada, New Mexico, and South Dakota). Most of the data were collected for the year of the wildfire and for 3 to 4 years after the wildfire. These data provide some estimate of the changes with time of postwildfire peak discharges, which are known to be transient but have received little documentation. The only required inputs for the analytical method are the burned area and a quantitative measure of soil burn severity (change in the normalized burn ratio), which is derived from Landsat reflectance data and is available from either the U.S. Department of Agriculture Forest Service or the U.S. Geological Survey. The method predicts the postwildfire peak discharge per unit burned area for the year of a wildfire, the first year after a wildfire, and the second year after a wildfire. It can be used at three levels of information depending on the data available to the user; each subsequent level requires either more data or more processing of the data. Level 1 requires only the burned area. Level 2 requires the burned area and the basin average value of the change in the normalized burn ratio. Level 3 requires the burned area and the calculation of the hydraulic functional connectivity, which is a variable that incorporates the sequence of soil burn severity along hillslope flow paths within the burned basin. Measurements indicate that the unit peak discharge response increases abruptly when the 30-minute maximum rainfall intensity is greater than about 5 millimeters per hour (0.2 inches per hour). This threshold may relate to a change in runoff generation from saturated-excess to infiltration-excess overland flow. The

  2. A Case for Transforming the Criterion of a Predictive Validity Study

    ERIC Educational Resources Information Center

    Patterson, Brian F.; Kobrin, Jennifer L.

    2011-01-01

    This study presents a case for applying a transformation (Box and Cox, 1964) of the criterion used in predictive validity studies. The goals of the transformation were to better meet the assumptions of the linear regression model and to reduce the residual variance of fitted (i.e., predicted) values. Using data for the 2008 cohort of first-time,…
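The Box and Cox (1964) transformation referenced here has a standard closed form: y(λ) = (yλ − 1)/λ for λ ≠ 0, and ln y in the λ → 0 limit. A minimal sketch:

```python
import math

def box_cox(y, lam):
    """Box-Cox (1964) power transformation of a positive response value y."""
    if lam == 0:
        return math.log(y)
    return (y ** lam - 1.0) / lam
```

At λ = 1 the transformation is a simple shift (y − 1), and as λ approaches 0 it converges to the natural logarithm, which is why the family smoothly spans untransformed and log-transformed criteria.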

  3. An analytic solution for numerical modeling validation in electromagnetics: the resistive sphere

    NASA Astrophysics Data System (ADS)

    Swidinsky, Andrei; Liu, Lifei

    2017-11-01

    We derive the electromagnetic response of a resistive sphere to an electric dipole source buried in a conductive whole space. The solution consists of an infinite series of spherical Bessel functions and associated Legendre polynomials, and follows the well-studied problem of a conductive sphere buried in a resistive whole space in the presence of a magnetic dipole. Our result is particularly useful for controlled-source electromagnetic problems using a grounded electric dipole transmitter and can be used to check numerical methods of calculating the response of resistive targets (such as finite difference, finite volume, finite element and integral equation). While we elect to focus on the resistive sphere in our examples, the expressions in this paper are completely general and allow for arbitrary source frequency, sphere radius, transmitter position, receiver position and sphere/host conductivity contrast so that conductive target responses can also be checked. Commonly used mesh validation techniques consist of comparisons against other numerical codes, but such solutions may not always be reliable or readily available. Alternatively, the response of simple 1-D models can be tested against well-known whole space, half-space and layered earth solutions, but such an approach is inadequate for validating models with curved surfaces. We demonstrate that our theoretical results can be used as a complementary validation tool by comparing analytic electric fields to those calculated through a finite-element analysis; the software implementation of this infinite series solution is made available for direct and immediate application.
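The series solution is built from exactly these special functions, and both satisfy three-term recurrences that make partial sums cheap to evaluate numerically. A self-contained sketch (illustrative, not the paper's software implementation):

```python
import math

def legendre(n, x):
    """Legendre polynomial P_n(x) via Bonnet's three-term recurrence."""
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

def spherical_jn(n, x):
    """Spherical Bessel function j_n(x) via upward recurrence.

    Upward recurrence is numerically stable only for x greater than
    roughly n; series solutions in practice use more careful schemes.
    """
    j0 = math.sin(x) / x
    if n == 0:
        return j0
    j1 = math.sin(x) / x**2 - math.cos(x) / x
    for k in range(1, n):
        j0, j1 = j1, (2 * k + 1) / x * j1 - j0
    return j1
```

Truncating the infinite series at some maximum degree and summing terms built from these functions is the usual way such analytic solutions are turned into reference values for mesh validation.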

  4. Cancer imaging phenomics toolkit: quantitative imaging analytics for precision diagnostics and predictive modeling of clinical outcome.

    PubMed

    Davatzikos, Christos; Rathore, Saima; Bakas, Spyridon; Pati, Sarthak; Bergman, Mark; Kalarot, Ratheesh; Sridharan, Patmaa; Gastounioti, Aimilia; Jahani, Nariman; Cohen, Eric; Akbari, Hamed; Tunc, Birkan; Doshi, Jimit; Parker, Drew; Hsieh, Michael; Sotiras, Aristeidis; Li, Hongming; Ou, Yangming; Doot, Robert K; Bilello, Michel; Fan, Yong; Shinohara, Russell T; Yushkevich, Paul; Verma, Ragini; Kontos, Despina

    2018-01-01

    The growth of multiparametric imaging protocols has paved the way for quantitative imaging phenotypes that predict treatment response and clinical outcome, reflect underlying cancer molecular characteristics and spatiotemporal heterogeneity, and can guide personalized treatment planning. This growth has underlined the need for efficient quantitative analytics to derive high-dimensional imaging signatures of diagnostic and predictive value in this emerging era of integrated precision diagnostics. This paper presents cancer imaging phenomics toolkit (CaPTk), a new and dynamically growing software platform for analysis of radiographic images of cancer, currently focusing on brain, breast, and lung cancer. CaPTk leverages the value of quantitative imaging analytics along with machine learning to derive phenotypic imaging signatures, based on two-level functionality. First, image analysis algorithms are used to extract comprehensive panels of diverse and complementary features, such as multiparametric intensity histogram distributions, texture, shape, kinetics, connectomics, and spatial patterns. At the second level, these quantitative imaging signatures are fed into multivariate machine learning models to produce diagnostic, prognostic, and predictive biomarkers. Results from clinical studies in three areas are shown: (i) computational neuro-oncology of brain gliomas for precision diagnostics, prediction of outcome, and treatment planning; (ii) prediction of treatment response for breast and lung cancer, and (iii) risk assessment for breast cancer.

  5. The Kuder Occupational Interest Inventory as a Moderator of Its Predictive Validity.

    ERIC Educational Resources Information Center

    Hansen, Chris J.; Zytowski, Donald G.

    1979-01-01

    A measure of the extent to which the Kuder Occupational Interest Survey (KOIS) was predictive of occupational membership for an individual was correlated with KOIS item and scale scores. Results indicated that the KOIS was a moderator of its own predictive validity. (Author/JKS)

  6. Predictive validity of the Braden Scale, Norton Scale, and Waterlow Scale in the Czech Republic.

    PubMed

    Šateková, Lenka; Žiaková, Katarína; Zeleníková, Renáta

    2017-02-01

    The aim of this study was to determine the predictive validity of the Braden, Norton, and Waterlow scales in 2 long-term care departments in the Czech Republic. Assessing the risk for developing pressure ulcers is the first step in their prevention. At present, many scales are used in clinical practice, but most of them have not been properly validated yet (for example, the Modified Norton Scale in the Czech Republic). In the Czech Republic, only the Braden Scale has been validated so far. This is a prospective comparative instrument testing study. A random sample of 123 patients was recruited. The predictive validity of the pressure ulcer risk assessment scales was evaluated based on sensitivity, specificity, positive and negative predictive values, and the area under the receiver operating characteristic curve. The data were collected from April to August 2014. In the present study, the best predictive validity values were observed for the Norton Scale, followed by the Braden Scale and the Waterlow Scale, in that order. We recommended that the above 3 pressure ulcer risk assessment scales continue to be evaluated in the Czech clinical setting. © 2016 John Wiley & Sons Australia, Ltd.
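The accuracy measures used in the study derive directly from a 2x2 confusion table, and the area under the ROC curve equals the probability that a randomly chosen case scores higher than a randomly chosen non-case (ties counting one half). A minimal sketch, illustrative rather than the study's actual analysis code:

```python
def validity_metrics(y_true, y_pred):
    """Sensitivity, specificity, PPV, NPV from binary labels and predictions.

    Assumes both classes are present (otherwise denominators are zero).
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

def auc(y_true, scores):
    """Area under the ROC curve as the concordance probability."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    total = 0.0
    for p in pos:
        for n in neg:
            total += 1.0 if p > n else (0.5 if p == n else 0.0)
    return total / (len(pos) * len(neg))
```

A scale with no discriminative value gives an AUC near 0.5; comparing AUCs across the Braden, Norton, and Waterlow scales on the same cohort is what ranks their predictive validity.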

  7. Investigating Postgraduate College Admission Interviews: Generalizability Theory Reliability and Incremental Predictive Validity

    ERIC Educational Resources Information Center

    Arce-Ferrer, Alvaro J.; Castillo, Irene Borges

    2007-01-01

    The use of face-to-face interviews in college admissions decisions is controversial, given the scarcity of validity and reliability evidence for most college admission processes. This study investigated the reliability and incremental predictive validity of a face-to-face postgraduate college admission interview with a sample of…
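Generalizability-theory reliability, as investigated here, partitions observed score variance into facets (e.g., interviewees and raters) and forms a reliability-like coefficient from the person variance over total variance. A schematic sketch under a simple one-facet (persons x raters) design; the variance-component estimation shown is the elementary ANOVA approach, not the study's procedure:

```python
def g_coefficient(ratings):
    """Relative G coefficient for a fully crossed persons-x-raters design.

    ratings: list of rows, one per person, each a list of rater scores.
    Uses classical ANOVA expected mean squares to estimate variance
    components (negative estimates truncated at zero).
    """
    n_p = len(ratings)            # persons
    n_r = len(ratings[0])         # raters
    grand = sum(map(sum, ratings)) / (n_p * n_r)
    person_means = [sum(row) / n_r for row in ratings]
    rater_means = [sum(row[j] for row in ratings) / n_p for j in range(n_r)]

    ms_p = n_r * sum((m - grand) ** 2 for m in person_means) / (n_p - 1)
    ms_r = n_p * sum((m - grand) ** 2 for m in rater_means) / (n_r - 1)
    ss_res = sum((ratings[i][j] - person_means[i] - rater_means[j] + grand) ** 2
                 for i in range(n_p) for j in range(n_r))
    ms_res = ss_res / ((n_p - 1) * (n_r - 1))

    var_p = max((ms_p - ms_res) / n_r, 0.0)   # person variance component
    var_res = ms_res                          # residual (interaction + error)
    return var_p / (var_p + var_res / n_r)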

  8. The Validity of College Grade Prediction Equations Over Time.

    ERIC Educational Resources Information Center

    Sawyer, Richard L.; Maxey, James

    A sample of 260 colleges was surveyed during the years 1972-1976 to determine the validity of predicting college freshmen grades from standardized test scores and high school grades using the American College Testing (ACT) Assessment Program, an evaluative and placement service for students and educators involved in the transition from high school…

  9. Thirty-Year Stability and Predictive Validity of Vocational Interests

    ERIC Educational Resources Information Center

    Rottinghaus, Patrick J.; Coon, Kristin L.; Gaffey, Abigail R.; Zytowski, Donald G.

    2007-01-01

    This study reports a 30-year follow-up of 107 former high school juniors and seniors from a rural Midwestern community who completed the Kuder Occupational Interest Survey (KOIS) in 1975 and 2005. Absolute, intra-individual, and test-retest stability of interests, and predictive validity of occupations were examined. Results showed minor absolute…

  10. Neurocognition and community outcome in schizophrenia: long-term predictive validity.

    PubMed

    Fujii, Daryl E; Wylie, A Michael

    2003-02-01

    The present study examined the predictive validity of neuropsychological measures for functional outcome in 26 schizophrenic patients 15-plus years post-testing. Outcome measures included score on the Resource Associated Functional Level Scale (RAFLS), number of state hospital admissions, and total duration of state hospital inpatient stay. Results of several stepwise multiple regressions revealed that verbal memory significantly predicted RAFLS score, accounting for nearly half of the variance. Trails B significantly predicted duration of state hospital inpatient status. Discussion focused on the utility of these measures for clinicians and system planners. Copyright 2002 Elsevier Science B.V.

  11. Developing and validating risk prediction models in an individual participant data meta-analysis

    PubMed Central

    2014-01-01

    Background Risk prediction models estimate the risk of developing future outcomes for individuals based on one or more underlying characteristics (predictors). We review how researchers develop and validate risk prediction models within an individual participant data (IPD) meta-analysis, in order to assess the feasibility and conduct of the approach. Methods A qualitative review of the aims, methodology, and reporting in 15 articles that developed a risk prediction model using IPD from multiple studies. Results The IPD approach offers many opportunities but methodological challenges exist, including: unavailability of requested IPD, missing patient data and predictors, and between-study heterogeneity in methods of measurement, outcome definitions and predictor effects. Most articles develop their model using IPD from all available studies and perform only an internal validation (on the same set of data). Ten of the 15 articles did not allow for any study differences in baseline risk (intercepts), potentially limiting their model’s applicability and performance in some populations. Only two articles used external validation (on different data), including a novel method which develops the model on all but one of the IPD studies, tests performance in the excluded study, and repeats by rotating the omitted study. Conclusions An IPD meta-analysis offers unique opportunities for risk prediction research. Researchers can make more of this by allowing separate model intercept terms for each study (population) to improve generalisability, and by using ‘internal-external cross-validation’ to simultaneously develop and validate their model. Methodological challenges can be reduced by prospectively planned collaborations that share IPD for risk prediction. PMID:24397587
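The "internal-external cross-validation" scheme highlighted in the conclusions develops the model on all IPD studies but one, validates it on the omitted study, and rotates the omission. A schematic sketch; the fit/evaluate callables below are deliberately trivial stand-ins (pooled event rate, calibration-in-the-large) for a real risk model:

```python
def internal_external_cv(studies, fit, evaluate):
    """Develop on all studies but one, validate on the omitted study, rotate.

    studies:  dict mapping study name -> that study's data
    fit:      callable taking a list of training datasets, returning a model
    evaluate: callable scoring a fitted model on one held-out dataset
    """
    results = {}
    for held_out in studies:
        training = [data for name, data in studies.items() if name != held_out]
        results[held_out] = evaluate(fit(training), studies[held_out])
    return results

# Trivial stand-in model: pooled event rate across training studies;
# the score is the absolute difference between predicted and observed
# event rate in the held-out study (calibration-in-the-large).
studies = {"A": [1, 0, 1], "B": [0, 0], "C": [1, 1]}
fit = lambda datasets: sum(map(sum, datasets)) / sum(map(len, datasets))
evaluate = lambda rate, data: abs(rate - sum(data) / len(data))
scores = internal_external_cv(studies, fit, evaluate)
```

Every study thus serves once as an external validation set, which is how the approach simultaneously develops and validates the model across populations.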

  12. Analytic cognitive style, not delusional ideation, predicts data gathering in a large beads task study.

    PubMed

    Ross, Robert M; Pennycook, Gordon; McKay, Ryan; Gervais, Will M; Langdon, Robyn; Coltheart, Max

    2016-07-01

    It has been proposed that deluded and delusion-prone individuals gather less evidence before forming beliefs than those who are not deluded or delusion-prone. The primary source of evidence for this "jumping to conclusions" (JTC) bias is provided by research that utilises the "beads task" data-gathering paradigm. However, the cognitive mechanisms subserving data gathering in this task are poorly understood. In the largest published beads task study to date (n = 558), we examined data gathering in the context of influential dual-process theories of reasoning. Analytic cognitive style (the willingness or disposition to critically evaluate outputs from intuitive processing and engage in effortful analytic processing) predicted data gathering in a non-clinical sample, but delusional ideation did not. The relationship between data gathering and analytic cognitive style suggests that dual-process theories of reasoning can contribute to our understanding of the beads task. It is not clear why delusional ideation was not found to be associated with data gathering or analytic cognitive style.

  13. Predictive Validity of Explicit and Implicit Threat Overestimation in Contamination Fear

    PubMed Central

    Green, Jennifer S.; Teachman, Bethany A.

    2012-01-01

    We examined the predictive validity of explicit and implicit measures of threat overestimation in relation to contamination-fear outcomes using structural equation modeling. Undergraduate students high in contamination fear (N = 56) completed explicit measures of contamination threat likelihood and severity, as well as looming vulnerability cognitions, in addition to an implicit measure of danger associations with potential contaminants. Participants also completed measures of contamination-fear symptoms, as well as subjective distress and avoidance during a behavioral avoidance task, and state looming vulnerability cognitions during an exposure task. The latent explicit (but not implicit) threat overestimation variable was a significant and unique predictor of contamination fear symptoms and self-reported affective and cognitive facets of contamination fear. In contrast, the implicit (but not explicit) latent measure predicted behavioral avoidance (at the level of a trend). Results are discussed in terms of differential predictive validity of implicit versus explicit markers of threat processing and multiple fear response systems. PMID:24073390

  14. Prediction of Primary Care Depression Outcomes at Six Months: Validation of DOC-6 ©.

    PubMed

    Angstman, Kurt B; Garrison, Gregory M; Gonzalez, Cesar A; Cozine, Daniel W; Cozine, Elizabeth W; Katzelnick, David J

    2017-01-01

    The goal of this study was to develop and validate an assessment tool for adult primary care patients diagnosed with depression to determine predictive probability of clinical outcomes at 6 months. We retrospectively reviewed 3096 adult patients enrolled in collaborative care management (CCM) for depression. Patients enrolled on or before December 31, 2013, served as the training set (n = 2525), whereas those enrolled after that date served as the preliminary validation set (n = 571). Six variables (2 demographic and 4 clinical) were statistically significant in determining clinical outcomes. Using the validation data set, the remission classifier produced the receiver operating characteristics (ROC) curve with a c-statistic or area under the curve (AUC) of 0.62, with predicted probabilities that ranged from 14.5% to 79.1%, with a median of 50.6%. The persistent depressive symptoms (PDS) classifier produced an ROC curve with a c-statistic or AUC of 0.67 and predicted probabilities that ranged from 5.5% to 73.1%, with a median of 23.5%. We were able to identify readily available variables and then validated these in the prediction of depression remission and PDS at 6 months. The DOC-6 tool may be used to predict which patients may be at risk for worse outcomes. © Copyright 2017 by the American Board of Family Medicine.

  15. Predictive Validity Study of the APS Writing and Reading Tests [and] Validating Placement Rules for the APS Writing Test.

    ERIC Educational Resources Information Center

    College of the Canyons, Valencia, CA. Office of Institutional Development.

    California's College of the Canyons has used the College Board Assessment and Placement Services (APS) test to assess students' abilities in basic and college English since spring 1993. These two reports summarize data from a May 1994 study of the predictive validity of the APS writing and reading tests and a June 1994 effort to validate the cut…

  16. The Predictive Validity of Savry Ratings for Assessing Youth Offenders in Singapore

    PubMed Central

    Chu, Chi Meng; Goh, Mui Leng; Chong, Dominic

    2015-01-01

    Empirical support for the usage of the SAVRY has been reported in studies conducted in many Western contexts, but not in a Singaporean context. This study compared the predictive validity of the SAVRY ratings for violent and general recidivism against the Youth Level of Service/Case Management Inventory (YLS/CMI) ratings within the Singaporean context. Using a sample of 165 male young offenders (M follow-up = 4.54 years), results showed that the SAVRY Total Score and Summary Risk Rating, as well as YLS/CMI Total Score and Overall Risk Rating, predicted violent and general recidivism. SAVRY Protective Total Score was only significantly predictive of desistance from general recidivism, and did not show incremental predictive validity for violent and general recidivism over the SAVRY Total Score. Overall, the results suggest that the SAVRY is suited (to varying degrees) for assessing the risk of violent and general recidivism in young offenders within the Singaporean context, but might not be better than the YLS/CMI. PMID:27231403

  17. A Public-Private Partnership Develops and Externally Validates a 30-Day Hospital Readmission Risk Prediction Model

    PubMed Central

    Choudhry, Shahid A.; Li, Jing; Davis, Darcy; Erdmann, Cole; Sikka, Rishi; Sutariya, Bharat

    2013-01-01

    Introduction: Preventing the occurrence of hospital readmissions is needed to improve quality of care and foster population health across the care continuum. Hospitals are being held accountable for improving transitions of care to avert unnecessary readmissions. Advocate Health Care in Chicago and Cerner (ACC) collaborated to develop all-cause, 30-day hospital readmission risk prediction models to identify patients that need interventional resources. Ideally, prediction models should encompass several qualities: they should have high predictive ability; use reliable and clinically relevant data; use vigorous performance metrics to assess the models; be validated in populations where they are applied; and be scalable in heterogeneous populations. However, a systematic review of prediction models for hospital readmission risk determined that most performed poorly (average C-statistic of 0.66) and efforts to improve their performance are needed for widespread usage. Methods: The ACC team incorporated electronic health record data, utilized a mixed-method approach to evaluate risk factors, and externally validated their prediction models for generalizability. Inclusion and exclusion criteria were applied on the patient cohort and then split for derivation and internal validation. Stepwise logistic regression was performed to develop two predictive models: one for admission and one for discharge. The prediction models were assessed for discrimination ability, calibration, overall performance, and then externally validated. Results: The ACC Admission and Discharge Models demonstrated modest discrimination ability during derivation, internal and external validation post-recalibration (C-statistic of 0.76 and 0.78, respectively), and reasonable model fit during external validation for utility in heterogeneous populations. Conclusions: The ACC Admission and Discharge Models embody the design qualities of ideal prediction models. 
The ACC plans to continue its partnership to

  18. Analytical and numerical prediction of harmonic sound power in the inlet of aero-engines with emphasis on transonic rotation speeds

    NASA Astrophysics Data System (ADS)

    Lewy, Serge; Polacsek, Cyril; Barrier, Raphael

    2014-12-01

    Tone noise radiated through the inlet of a turbofan is mainly due to rotor-stator interactions at subsonic regimes (approach flight), and to the shock waves attached to each blade at supersonic helical tip speeds (takeoff). The axial compressor of a helicopter turboshaft engine is transonic as well and can be studied like turbofans at takeoff. The objective of the paper is to predict the sound power at the inlet radiating into the free field, with a focus on transonic conditions because sound levels are much higher. Direct numerical computation of tone acoustic power is based on a RANS (Reynolds averaged Navier-Stokes) solver followed by an integration of acoustic intensity over specified inlet cross-sections, derived from Cantrell and Hart equations (valid in irrotational flows). In transonic regimes, sound power decreases along the intake because of nonlinear propagation, which must be discriminated from numerical dissipation. This is one of the reasons why an analytical approach is also suggested. It is based on three steps: (i) appraisal of the initial pressure jump of the shock waves; (ii) 2D nonlinear propagation model of Morfey and Fisher; (iii) calculation of the sound power of the 3D ducted acoustic field. In this model, all the blades are assumed to be identical such that only the blade passing frequency and its harmonics are predicted (as in the present numerical simulations). However, transfer from blade passing frequency to multiple pure tones can be evaluated in a fourth step through a statistical analysis of irregularities between blades. The interest of the analytical method is that it provides a good estimate of nonlinear acoustic propagation in the upstream duct while being easy and fast to compute. The various methods are applied to two turbofan models, respectively in approach (subsonic) and takeoff (transonic) conditions, and to a Turbomeca turboshaft engine (transonic case). 
The analytical method in transonic appears to be quite reliable by comparison

  19. Assessing Attachment Security With the Attachment Q Sort: Meta-Analytic Evidence for the Validity of the Observer AQS

    ERIC Educational Resources Information Center

    van IJzendoorn, Marinus H.; Vereijken, Carolus M. J. L.; Bakermans-Kranenburg, Marian J.; Riksen-Walraven, Marianne J.

    2004-01-01

    The reliability and validity of the Attachment Q Sort (AQS; Waters & Deane, 1985) were tested in a series of meta-analyses on 139 studies with 13,835 children. The observer AQS security score showed convergent validity with Strange Situation procedure (SSP) security (r = .31) and excellent predictive validity with sensitivity measures (r = .39). Its…

  20. Analytical model for force prediction when machining metal matrix composites

    NASA Astrophysics Data System (ADS)

    Sikder, Snahungshu

    Metal Matrix Composites (MMC) offer several thermo-mechanical advantages over standard materials and alloys which make them better candidates in different applications. Their light weight, high stiffness, and strength have attracted several industries such as automotive, aerospace, and defence for their wide range of products. However, the widespread application of Metal Matrix Composites is still a challenge for industry. The hard and abrasive nature of the reinforcement particles is responsible for rapid tool wear and high machining costs. Fracture and debonding of the abrasive reinforcement particles are the considerable damage modes that directly influence the tool performance. It is therefore important to find highly effective ways to machine MMCs and to predict the forces generated during machining, because this helps in choosing suitable tools and ultimately saves both money and time. This research presents an analytical force model for predicting the forces generated during machining of Metal Matrix Composites. In estimating the generated forces, several aspects of cutting mechanics were considered including: shearing force, ploughing force, and particle fracture force. Chip formation force was obtained by classical orthogonal metal cutting mechanics and the Johnson-Cook Equation. The ploughing force was formulated while the fracture force was calculated from the slip line field theory and the Griffith theory of failure. The predicted results were compared with previously measured data. The results showed very good agreement between the theoretically predicted and experimentally measured cutting forces.
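The Johnson-Cook equation used for the chip formation force has a widely cited standard form, in which A, B, C, n, and m are material constants fitted to test data, ε is the equivalent plastic strain, ε̇/ε̇₀ the normalized strain rate, and T* the homologous temperature:

```latex
\sigma = \left(A + B\,\varepsilon^{\,n}\right)
         \left(1 + C \ln \frac{\dot{\varepsilon}}{\dot{\varepsilon}_0}\right)
         \left(1 - T^{*\,m}\right),
\qquad
T^{*} = \frac{T - T_{\mathrm{room}}}{T_{\mathrm{melt}} - T_{\mathrm{room}}}
```

The three factors separate strain hardening, strain-rate sensitivity, and thermal softening, which is what makes the model convenient for machining force estimation.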

  1. TOPEX Microwave Radiometer - Thermal design verification test and analytical model validation

    NASA Technical Reports Server (NTRS)

    Lin, Edward I.

    1992-01-01

    The testing of the TOPEX Microwave Radiometer (TMR) is described in terms of hardware development based on the modeling and thermal vacuum testing conducted. The TMR and the vacuum-test facility are described, and the thermal verification test includes a hot steady-state segment, a cold steady-state segment, and a cold survival mode segment totalling 65 hours. A graphic description is given of the test history as it relates to temperature tracking, and two multinode TMR test-chamber models are compared to the test results. Large discrepancies between the test data and the model predictions are attributed to contact conductance, effective emittance from the multilayer insulation, and heat leaks related to deviations from the flight configuration. The TMR thermal testing/modeling effort is shown to provide technical corrections for the procedure outlined, and the need for validating predictive models is underscored.

  2. Derivation and validation of in-hospital mortality prediction models in ischaemic stroke patients using administrative data.

    PubMed

    Lee, Jason; Morishima, Toshitaka; Kunisawa, Susumu; Sasaki, Noriko; Otsubo, Tetsuya; Ikai, Hiroshi; Imanaka, Yuichi

    2013-01-01

    Stroke and other cerebrovascular diseases are a major cause of death and disability. Predicting in-hospital mortality in ischaemic stroke patients can help to identify high-risk patients and guide treatment approaches. Chart reviews provide important clinical information for mortality prediction, but are laborious and limit sample sizes. Administrative data allow for large-scale multi-institutional analyses but lack the necessary clinical information for outcome research. However, administrative claims data in Japan have seen the recent inclusion of patient consciousness and disability information, which may allow more accurate mortality prediction using administrative data alone. The aim of this study was to derive and validate models to predict in-hospital mortality in patients admitted for ischaemic stroke using administrative data. The sample consisted of 21,445 patients from 176 Japanese hospitals, who were randomly divided into derivation and validation subgroups. Multivariable logistic regression models were developed using 7- and 30-day and overall in-hospital mortality as dependent variables. Independent variables included patient age, sex, comorbidities upon admission, Japan Coma Scale (JCS) score, Barthel Index score, modified Rankin Scale (mRS) score, and admissions after hours and on weekends/public holidays. Models were developed in the derivation subgroup, and coefficients from these models were applied to the validation subgroup. Predictive ability was analysed using C-statistics; calibration was evaluated with Hosmer-Lemeshow χ² tests. All three models showed predictive abilities similar to or surpassing those of chart review-based models. The C-statistics were highest in the 7-day in-hospital mortality prediction model, at 0.906 and 0.901 in the derivation and validation subgroups, respectively. For the 30-day in-hospital mortality prediction models, the C-statistics for the derivation and validation subgroups were 0.893 and 0
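The Hosmer-Lemeshow calibration test used here compares observed and expected event counts across groups ordered by predicted risk. A minimal sketch of the statistic (illustrative only; the study's analysis was presumably done in a statistics package, and p-values would additionally require a chi-squared reference distribution):

```python
def hosmer_lemeshow(y_true, probs, groups=10):
    """Hosmer-Lemeshow chi-squared statistic over risk-ordered groups.

    y_true: binary outcomes; probs: predicted event probabilities.
    Groups with a degenerate mean probability (0 or 1) are skipped.
    """
    paired = sorted(zip(probs, y_true))
    n = len(paired)
    stat = 0.0
    for g in range(groups):
        chunk = paired[g * n // groups:(g + 1) * n // groups]
        if not chunk:
            continue
        observed = sum(t for _, t in chunk)
        expected = sum(p for p, _ in chunk)
        m = len(chunk)
        mean_p = expected / m
        if 0 < mean_p < 1:
            stat += (observed - expected) ** 2 / (m * mean_p * (1 - mean_p))
    return stat
```

A well-calibrated model yields a small statistic (observed counts close to expected in every risk group), complementing the C-statistic's measure of discrimination.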

  3. Galaxy Formation At Extreme Redshifts: Semi-Analytic Model Predictions And Challenges For Observations

    NASA Astrophysics Data System (ADS)

    Yung, L. Y. Aaron; Somerville, Rachel S.

    2017-06-01

    The well-established Santa Cruz semi-analytic galaxy formation framework has been shown to be quite successful at explaining observations in the local Universe, as well as making predictions for low-redshift observations. Recently, metallicity-based gas partitioning and H2-based star formation recipes have been implemented in our model, replacing the legacy cold-gas based recipe. We then use our revised model to explore the high-redshift Universe and make predictions up to z = 15. Although our model is only calibrated to observations from the local Universe, our predictions agree remarkably well with mid- to high-redshift observational constraints available to date, including rest-frame UV luminosity functions and the reionization history as constrained by CMB and IGM observations. We provide predictions for individual and statistical galaxy properties at a wide range of redshifts (z = 4 - 15), including objects that are too distant or too faint to be detected with current facilities. Using our model predictions, we also provide forecast luminosity functions and other observables for upcoming studies with JWST.

  4. Noise Certification Predictions for FJX-2-Powered Aircraft Using Analytic Methods

    NASA Technical Reports Server (NTRS)

    Berton, Jeffrey J.

    1999-01-01

    Williams International Co. is currently developing the 700-pound-thrust-class FJX-2 turbofan engine for the General Aviation Propulsion Program's Turbine Engine Element. As part of the 1996 NASA-Williams cooperative working agreement, NASA agreed to analytically calculate the noise certification levels of the FJX-2-powered V-Jet II test bed aircraft. Although the V-Jet II is a demonstration aircraft that is unlikely to be produced and certified, the noise results presented here may be considered representative of the noise levels of small general aviation jet aircraft that the FJX-2 would power. A single-engine variant of the V-Jet II, the V-Jet I concept airplane, is also considered. Reported in this paper are the analytically predicted FJX-2/V-Jet noise levels appropriate for Federal Aviation Regulation certification. Also reported are FJX-2/V-Jet noise levels using noise metrics appropriate for the propeller-driven aircraft that will be its major market competition, as well as a sensitivity analysis of the certification noise levels to major system uncertainties.

  5. Research prioritization through prediction of future impact on biomedical science: a position paper on inference-analytics.

    PubMed

    Ganapathiraju, Madhavi K; Orii, Naoki

    2013-08-30

    Advances in biotechnology have created "big-data" situations in molecular and cellular biology. Several sophisticated algorithms have been developed that process big data to generate hundreds of biomedical hypotheses (or predictions). The bottleneck to translating this large number of biological hypotheses is that each of them needs to be studied by experimentation for interpreting its functional significance. Even when the predictions are estimated to be very accurate, from a biologist's perspective, the choice of which of these predictions is to be studied further is made based on factors like availability of reagents and resources and the possibility of formulating some reasonable hypothesis about its biological relevance. When viewed from a global perspective, say from that of a federal funding agency, ideally the choice of which prediction should be studied would be made based on which of them can make the most translational impact. We propose that algorithms be developed to identify which of the computationally generated hypotheses have potential for high translational impact; this way, funding agencies and scientific community can invest resources and drive the research based on a global view of biomedical impact without being deterred by local view of feasibility. In short, data-analytic algorithms analyze big-data and generate hypotheses; in contrast, the proposed inference-analytic algorithms analyze these hypotheses and rank them by predicted biological impact. We demonstrate this through the development of an algorithm to predict biomedical impact of protein-protein interactions (PPIs) which is estimated by the number of future publications that cite the paper which originally reported the PPI. This position paper describes a new computational problem that is relevant in the era of big-data and discusses the challenges that exist in studying this problem, highlighting the need for the scientific community to engage in this line of research. 

  6. Analytical modeling of the temporal evolution of hot spot temperatures in silicon solar cells

    NASA Astrophysics Data System (ADS)

    Wasmer, Sven; Rajsrima, Narong; Geisemeyer, Ino; Fertig, Fabian; Greulich, Johannes Michael; Rein, Stefan

    2018-03-01

    We present an approach to predict the equilibrium temperature of hot spots in crystalline silicon solar cells based on the analysis of their temporal evolution right after a reverse bias is switched on. To this end, we derive an analytical expression for the time-dependent heat diffusion of a breakdown channel that is assumed to be cylindrical. We validate this expression by means of thermography imaging of hot spots right after switching on a reverse bias. The expression can be used to extract hot spot powers and radii from short-term measurements, targeting application in inline solar cell characterization. The extracted hot spot powers are validated against long-term dark lock-in thermography imaging. Using a look-up table of expected equilibrium temperatures determined by numerical and analytical simulations, we utilize the determined hot spot properties to predict the equilibrium temperatures of about 100 industrial aluminum back-surface field solar cells and achieve a high correlation coefficient of 0.86 and a mean absolute error of only 3.3 K.
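    The authors' closed-form expression is for a cylindrical breakdown channel and is not given in the abstract. As an illustration of the same modelling idea, the textbook solution for a continuous point heat source switched on at t = 0, which likewise relaxes toward an equilibrium temperature, can be sketched as (material constants are nominal values for silicon, not the paper's parameters):

```python
import math

K_SI = 148.0       # thermal conductivity of silicon (W/m/K), nominal
ALPHA_SI = 8.8e-5  # thermal diffusivity of silicon (m^2/s), nominal

def hotspot_delta_t(power, r, t):
    """Temperature rise (K) at distance r (m) and time t (s) after
    switching on a continuous point source dissipating `power` (W):
    dT = power / (4*pi*k*r) * erfc(r / (2*sqrt(alpha*t)))."""
    return (power / (4.0 * math.pi * K_SI * r)
            * math.erfc(r / (2.0 * math.sqrt(ALPHA_SI * t))))

def equilibrium_delta_t(power, r):
    """The t -> infinity limit (erfc(0) = 1) is the equilibrium rise."""
    return power / (4.0 * math.pi * K_SI * r)
```

    Fitting the early transient to such an expression is what allows power and geometry, and hence the equilibrium temperature, to be inferred from a short measurement.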

  7. Development and validation of a predictive model for excessive postpartum blood loss: A retrospective, cohort study.

    PubMed

    Rubio-Álvarez, Ana; Molina-Alarcón, Milagros; Arias-Arias, Ángel; Hernández-Martínez, Antonio

    2018-03-01

    Postpartum haemorrhage is one of the leading causes of maternal morbidity and mortality worldwide. Despite the use of uterotonic agents as a preventive measure, it remains a challenge to identify those women who are at increased risk of postpartum bleeding. The aim was to develop and validate a predictive model to assess the risk of excessive bleeding in women with vaginal birth, in a retrospective cohort study at Mancha-Centro Hospital (Spain). The predictive model was built on a derivation cohort of 2336 women between 2009 and 2011; for validation, a prospective cohort of 953 women between 2013 and 2014 was employed. Women with antenatal fetal demise, multiple pregnancies and gestations under 35 weeks were excluded. We used multivariate analysis with binary logistic regression, ridge regression and areas under the receiver operating characteristic curves to determine the predictive ability of the proposed model. There were 197 (8.43%) women with excessive bleeding in the derivation cohort and 63 (6.61%) in the validation cohort. Predictive factors in the final model were: maternal age, primiparity, duration of the first and second stages of labour, neonatal birth weight and antepartum haemoglobin levels. The predictive ability of this model was 0.90 (95% CI: 0.85-0.93) in the derivation cohort and remained 0.83 (95% CI: 0.74-0.92) in the validation cohort. The model showed excellent predictive ability in the derivation cohort, and its validation in a later population likewise showed good predictive ability. This model can be employed to identify women at higher risk of postpartum haemorrhage. Copyright © 2017 Elsevier Ltd. All rights reserved.
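    The reported predictive abilities (0.90 and 0.83) are areas under the ROC curve. A minimal sketch of how such an AUC is computed from predicted risks and observed outcomes, via its Mann-Whitney interpretation (the data below are illustrative, not the study's):

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via its Mann-Whitney interpretation:
    the probability that a randomly chosen positive case scores higher
    than a randomly chosen negative case (ties count one half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative outcomes (1 = excessive bleeding) and model-predicted risks.
auc = roc_auc([1, 0, 1, 0], [0.9, 0.8, 0.7, 0.2])  # 0.75
```

    An AUC of 0.5 is chance-level discrimination; values around 0.9, as in the derivation cohort here, indicate excellent separation of bleeders from non-bleeders.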

  8. Real-time sensor data validation

    NASA Technical Reports Server (NTRS)

    Bickmore, Timothy W.

    1994-01-01

    This report describes the status of an ongoing effort to develop software capable of detecting sensor failures on rocket engines in real time. This software could be used in a rocket engine controller to prevent the erroneous shutdown of an engine due to sensor failures which would otherwise be interpreted as engine failures by the control software. The approach taken combines analytical redundancy with Bayesian belief networks to provide a solution which has well-defined real-time characteristics and well-defined error rates. Analytical redundancy is a technique in which a sensor's value is predicted by using values from other sensors and known or empirically derived mathematical relations. A set of sensors and a set of relations among them form a network of cross-checks which can be used to periodically validate all of the sensors in the network. Bayesian belief networks provide a method of determining whether each of the sensors in the network is valid, given the results of the cross-checks. This approach has been successfully demonstrated on the Technology Test Bed Engine at the NASA Marshall Space Flight Center. Current efforts are focused on extending the system to provide a validation capability for 100 sensors on the Space Shuttle Main Engine.
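    The cross-check idea of analytical redundancy can be sketched as follows; the sensor names and relations are hypothetical, and the Bayesian belief network that fuses the cross-check results into per-sensor validity estimates is omitted:

```python
def validate_sensors(readings, relations, tol):
    """Analytical redundancy: flag sensors whose readings disagree with
    values predicted from the other sensors via known relations."""
    suspect = {}
    for name, predict in relations.items():
        suspect[name] = abs(readings[name] - predict(readings)) > tol
    return suspect

# Hypothetical cross-check network: flow tracks pump speed, and the
# downstream pressure drops with the square of the flow.
relations = {
    "flow": lambda r: 0.5 * r["pump_speed"],
    "p_down": lambda r: r["p_up"] - 0.01 * r["flow"] ** 2,
}

healthy = {"pump_speed": 100.0, "flow": 50.0, "p_up": 30.0, "p_down": 5.0}
flags = validate_sensors(healthy, relations, tol=1.0)       # all consistent

failed = dict(healthy, flow=80.0)                           # inject a fault
fault_flags = validate_sensors(failed, relations, tol=1.0)  # cross-checks trip
```

    Note that a single bad sensor trips every cross-check it participates in (here both "flow" and "p_down" are flagged), which is exactly why a belief network is layered on top: to infer, from the pattern of failed checks, which sensor is actually invalid.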

  9. [Validation of the Glasgow-Blatchford Scoring System to predict mortality in patients with upper gastrointestinal bleeding in a hospital of Lima, Peru (June 2012-December 2013)].

    PubMed

    Cassana, Alessandra; Scialom, Silvia; Segura, Eddy R; Chacaltana, Alfonso

    2015-07-01

    Upper gastrointestinal bleeding is a major cause of hospitalization and the most prevalent emergency worldwide, with a mortality rate of up to 14%. In Peru, there have not been any studies on the use of the Glasgow-Blatchford Scoring System to predict mortality in upper gastrointestinal bleeding. The aim of this study is to perform an external validation of the Glasgow-Blatchford Scoring System and to establish the best cutoff for predicting mortality in upper gastrointestinal bleeding in a hospital of Lima, Peru. This was a longitudinal, retrospective, analytical validation study, with data from patients with a clinical and endoscopic diagnosis of upper gastrointestinal bleeding treated at the Gastrointestinal Hemorrhage Unit of the Hospital Nacional Edgardo Rebagliati Martins between June 2012 and December 2013. We calculated the area under the receiver operating characteristic curve of the Glasgow-Blatchford Scoring System for predicting mortality, with a 95% confidence interval. A total of 339 records were analyzed; 57.5% were male and the mean age (standard deviation) was 67.0 (15.7) years. The median Glasgow-Blatchford score in the population was 12. The ROC analysis for death gave an area under the curve of 0.59 (95% CI 0.5-0.7). Stratifying by type of upper gastrointestinal bleeding resulted in an area under the curve of 0.66 (95% CI 0.53-0.78) for the non-variceal type. In this population, the Glasgow-Blatchford Scoring System has no diagnostic validity for predicting mortality.

  10. Responsiveness and predictive validity of the tablet-based symbol digit modalities test in patients with stroke.

    PubMed

    Hsiao, Pei-Chi; Yu, Wan-Hui; Lee, Shih-Chieh; Chen, Mei-Hsiang; Hsieh, Ching-Lin

    2018-06-14

    The responsiveness and predictive validity of the Tablet-based Symbol Digit Modalities Test (T-SDMT) are unknown, which limits the utility of the T-SDMT in both clinical and research settings. The purpose of this study was to examine the responsiveness and predictive validity of the T-SDMT in inpatients with stroke. A follow-up, repeated-assessments design was used in one rehabilitation unit at a local medical center. A total of 50 inpatients receiving rehabilitation completed T-SDMT assessments at admission to and discharge from a rehabilitation ward. The median follow-up period was 14 days. The Barthel index (BI) was assessed at discharge and was used as the criterion for predictive validity. The mean change in T-SDMT scores between admission and discharge was statistically significant (paired t-test = 3.46, p = 0.001). The T-SDMT scores showed a nearly moderate standardized response mean (0.49). A moderate association (Pearson's r = 0.47) was found between T-SDMT scores at admission and BI scores at discharge, indicating good predictive validity of the T-SDMT. Our results support the responsiveness and predictive validity of the T-SDMT in patients with stroke receiving rehabilitation in hospitals. This study provides empirical evidence supporting the use of the T-SDMT as an outcome measure for assessing processing speed in inpatients with stroke. T-SDMT scores could be used to predict basic activities of daily living function in inpatients with stroke.

  11. The Predictive Validity of the ABFM's In-Training Examination.

    PubMed

    O'Neill, Thomas R; Li, Zijia; Peabody, Michael R; Lybarger, Melanie; Royal, Kenneth; Puffer, James C

    2015-05-01

    Our objective was to examine the predictive validity of the American Board of Family Medicine's (ABFM) In-Training Examination (ITE) with regard to predicting outcomes on the ABFM certification examination. This study used a repeated measures design across three levels of medical training (PGY1-PGY2, PGY2-PGY3, and PGY3-initial certification) with three different cohorts (2010-2011, 2011-2012, and 2012-2013) to examine: (1) how well the residents' ITE scores correlated with their test scores in the following year, (2) what the typical score increase was across training years, and (3) what the sensitivity, specificity, positive predictive value, and negative predictive value of the PGY3 scores were with regard to predicting future results on the MC-FP Examination. ITE scores generally correlate at about .7 with the following year's ITE or with the following year's certification examination. The mean growth was 52 points from PGY1 to PGY2, 34 points from PGY2 to PGY3, and 27 points from PGY3 to initial certification. The sensitivity, specificity, positive predictive value, and negative predictive value were .91, .47, .96, and .27, respectively. The ITE is a useful predictor of future ITE and initial certification examination performance.
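    The four reported statistics follow directly from a 2x2 table of predicted pass/fail against observed certification outcomes. A generic sketch (the counts below are illustrative, not the ABFM data):

```python
def diagnostic_stats(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table
    of predicted pass/fail against observed outcome."""
    return {
        "sensitivity": tp / (tp + fn),  # true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
        "ppv": tp / (tp + fp),          # precision of a "pass" prediction
        "npv": tn / (tn + fn),          # precision of a "fail" prediction
    }

# Illustrative counts only.
stats = diagnostic_stats(tp=80, fp=30, tn=70, fn=20)
```

    The pattern reported above (high sensitivity and PPV, low specificity and NPV) is typical when the outcome being predicted, passing the certification examination, is very common.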

  12. Efficient strategies for leave-one-out cross validation for genomic best linear unbiased prediction.

    PubMed

    Cheng, Hao; Garrick, Dorian J; Fernando, Rohan L

    2017-01-01

    A random multiple-regression model that simultaneously fits all allele substitution effects for additive markers or haplotypes as uncorrelated random effects has been proposed for best linear unbiased prediction using whole-genome data. Leave-one-out cross validation (LOOCV) can be used to quantify the predictive ability of a statistical model. Naive application of LOOCV is computationally intensive because the training and validation analyses need to be repeated n times, once for each observation. Efficient LOOCV strategies are presented here, requiring little more effort than a single analysis. The efficient strategy was 786 times faster than the naive application for a simulated dataset with 1,000 observations and 10,000 markers, and 99 times faster with 1,000 observations and 100 markers. These efficiencies relative to the naive approach using the same model will increase with the number of observations.
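    For linear models with a fixed shrinkage parameter, this kind of efficiency comes from the classic PRESS identity: the leave-one-out residual equals the ordinary residual divided by 1 - h_ii, where h_ii is the i-th diagonal of the hat matrix, so no refitting is needed. A sketch for ridge regression on simulated data (the paper's GBLUP strategies are analogous but not identical to this):

```python
import numpy as np

# Simulated data: n observations, p predictors (stand-ins for markers).
rng = np.random.default_rng(0)
n, p, lam = 50, 20, 1.0
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) + rng.standard_normal(n)

# Single ridge fit on all n observations (shrinkage parameter lam fixed).
A = X.T @ X + lam * np.eye(p)
H = X @ np.linalg.solve(A, X.T)      # hat matrix: y_hat = H @ y
e = y - H @ y                        # ordinary residuals
e_loo_fast = e / (1.0 - np.diag(H))  # PRESS identity: no refitting needed

# Naive LOOCV for comparison: refit n times, once per held-out observation.
e_loo_naive = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    Xi, yi = X[mask], y[mask]
    beta = np.linalg.solve(Xi.T @ Xi + lam * np.eye(p), Xi.T @ yi)
    e_loo_naive[i] = y[i] - X[i] @ beta
```

    The two residual vectors agree to machine precision, while the fast path costs one factorization instead of n.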

  13. Predicting Pilot Error in Nextgen: Pilot Performance Modeling and Validation Efforts

    NASA Technical Reports Server (NTRS)

    Wickens, Christopher; Sebok, Angelia; Gore, Brian; Hooey, Becky

    2012-01-01

    We review 25 articles presenting 5 general classes of computational models to predict pilot error. This more targeted review is placed within the context of the broader review of computational models of pilot cognition and performance, including such aspects as models of situation awareness or pilot-automation interaction. Particular emphasis is placed on the degree of validation of such models against empirical pilot data, and the relevance of the modeling and validation efforts to Next Gen technology and procedures.

  14. Analytic cognitive style predicts paranormal explanations of anomalous experiences but not the experiences themselves: Implications for cognitive theories of delusions.

    PubMed

    Ross, Robert M; Hartig, Bjoern; McKay, Ryan

    2017-09-01

    It has been proposed that delusional beliefs are attempts to explain anomalous experiences. Why, then, do anomalous experiences induce delusions in some people but not in others? One possibility is that people with delusions have reasoning biases that result in them failing to reject implausible candidate explanations for anomalous experiences. We examine this hypothesis by studying paranormal interpretations of anomalous experiences. We examined whether analytic cognitive style (i.e. the willingness or disposition to critically evaluate outputs from intuitive processing and engage in effortful analytic processing) predicted anomalous experiences and paranormal explanations for these experiences after controlling for demographic variables and cognitive ability. Analytic cognitive style predicted paranormal explanations for anomalous experiences, but not the anomalous experiences themselves. We did not study clinical delusions. Our attempts to control for cognitive ability may have been inadequate. Our sample was predominantly students. Limited analytic cognitive style might contribute to the interpretation of anomalous experiences in terms of delusional beliefs. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. The Predictive Validity of the Metropolitan Readiness Tests, 1976 Edition.

    ERIC Educational Resources Information Center

    Nagle, Richard J.

    1979-01-01

    A sample of 176 first-grade children was tested on the Metropolitan Readiness Tests, 1976 Edition (MRT), during the initial month of school and was retested eight months later on the Stanford Achievement Test. Results demonstrated substantial validity of the MRT for predicting first-grade achievement. (Author/CTM)

  16. Parent- and Self-Reported Dimensions of Oppositionality in Youth: Construct Validity, Concurrent Validity, and the Prediction of Criminal Outcomes in Adulthood

    ERIC Educational Resources Information Center

    Aebi, Marcel; Plattner, Belinda; Metzke, Christa Winkler; Bessler, Cornelia; Steinhausen, Hans-Christoph

    2013-01-01

    Background: Different dimensions of oppositional defiant disorder (ODD) have been found as valid predictors of further mental health problems and antisocial behaviors in youth. The present study aimed at testing the construct, concurrent, and predictive validity of ODD dimensions derived from parent- and self-report measures. Method: Confirmatory…

  17. The Predictive Validity of Four Intelligence Tests for School Grades: A Small Sample Longitudinal Study

    PubMed Central

    Gygi, Jasmin T.; Hagmann-von Arx, Priska; Schweizer, Florine; Grob, Alexander

    2017-01-01

    Intelligence is considered the strongest single predictor of scholastic achievement. However, little is known regarding the predictive validity of well-established intelligence tests for school grades. We analyzed the predictive validity of four intelligence tests widely used in German-speaking countries: the Intelligence and Development Scales (IDS), the Reynolds Intellectual Assessment Scales (RIAS), the Snijders-Oomen Nonverbal Intelligence Test (SON-R 6-40), and the Wechsler Intelligence Scale for Children (WISC-IV), which were individually administered to 103 children (mean age = 9.17 years) enrolled in regular school. School grades were collected longitudinally after 3 years (averaged school grades, mathematics, and language) and were available for 54 children (mean age = 11.77 years). All four tests significantly predicted averaged school grades. Furthermore, the IDS and the RIAS predicted both mathematics and language, while the SON-R 6-40 predicted mathematics. The WISC-IV showed no significant association with longitudinal scholastic achievement when mathematics and language were analyzed separately. The results revealed the predictive validity of currently used intelligence tests for longitudinal scholastic achievement in German-speaking countries and support their use in psychological practice, in particular for predicting averaged school grades. However, this conclusion has to be considered preliminary due to the small sample of children observed. PMID:28348543

  18. An Other Perspective on Personality: Meta-Analytic Integration of Observers' Accuracy and Predictive Validity

    ERIC Educational Resources Information Center

    Connelly, Brian S.; Ones, Deniz S.

    2010-01-01

    The bulk of personality research has been built from self-report measures of personality. However, collecting personality ratings from other-raters, such as family, friends, and even strangers, is a dramatically underutilized method that allows better explanation and prediction of personality's role in many domains of psychology. Drawing…

  19. NNvPDB: Neural Network based Protein Secondary Structure Prediction with PDB Validation.

    PubMed

    Sakthivel, Seethalakshmi; S K M, Habeeb

    2015-01-01

    The secondary structural states predicted by existing servers are not cross-validated, so the level of accuracy for each sequence is not reported. This is overcome by NNvPDB, which not only reports a greater Q3 but also validates every prediction against homologous PDB entries. NNvPDB is based on neural networks, with a new approach of training the network each time with five PDB structures that are similar to the query sequence. The average accuracy is 76% for helix, 71% for beta sheet and 66% overall (helix, sheet and coil). http://bit.srmuniv.ac.in/cgi-bin/bit/cfpdb/nnsecstruct.pl.
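    Q3, the accuracy figure reported above, is simply the percentage of residues whose three-state assignment matches the reference structure. A minimal sketch:

```python
def q3_accuracy(predicted, observed):
    """Q3: percentage of residues whose three-state secondary structure
    (H = helix, E = strand/sheet, C = coil) is predicted correctly."""
    correct = sum(p == o for p, o in zip(predicted, observed))
    return 100.0 * correct / len(observed)

# Toy example: 8 of 10 residues match the reference assignment.
q3 = q3_accuracy("HHHHECCCCC", "HHHECCCCCC")  # 80.0
```

    Per-class accuracies (the 76% helix and 71% sheet figures) are computed the same way, restricted to positions whose reference state is H or E respectively.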

  20. Modification and validation of an analytical source model for external beam radiotherapy Monte Carlo dose calculations.

    PubMed

    Davidson, Scott E; Cui, Jing; Kry, Stephen; Deasy, Joseph O; Ibbott, Geoffrey S; Vicic, Milos; White, R Allen; Followill, David S

    2016-08-01

    capable of predicting the depth of the maximum dose within 1 mm. Anthropomorphic phantom benchmark testing of modulated and patterned MLC treatment plans showed agreement with measurement within 3% in target regions using thermoluminescent dosimeters (TLD). Using radiochromic film normalized to TLD, a gamma criterion of 3% of maximum dose and 2 mm DTA was applied with a pass rate of at least 85% in the high dose, high gradient, and low dose regions. Finally, recalculations of patient plans using DPM showed good agreement relative to a commercial TPS when comparing dose volume histograms and 2D dose distributions. A unique analytical source model coupled to the dose planning method Monte Carlo dose calculation code has been modified and validated using basic beam data and anthropomorphic phantom measurement. While this tool can be applied in general use for a particular linac model, it was specifically developed to provide a singular methodology to independently assess treatment plan dose distributions from clinical institutions participating in National Cancer Institute trials.

  1. Modification and validation of an analytical source model for external beam radiotherapy Monte Carlo dose calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davidson, Scott E., E-mail: sedavids@utmb.edu

    data points tested. The model was capable of predicting the depth of the maximum dose within 1 mm. Anthropomorphic phantom benchmark testing of modulated and patterned MLC treatment plans showed agreement with measurement within 3% in target regions using thermoluminescent dosimeters (TLD). Using radiochromic film normalized to TLD, a gamma criterion of 3% of maximum dose and 2 mm DTA was applied with a pass rate of at least 85% in the high dose, high gradient, and low dose regions. Finally, recalculations of patient plans using DPM showed good agreement relative to a commercial TPS when comparing dose volume histograms and 2D dose distributions. Conclusions: A unique analytical source model coupled to the dose planning method Monte Carlo dose calculation code has been modified and validated using basic beam data and anthropomorphic phantom measurement. While this tool can be applied in general use for a particular linac model, it was specifically developed to provide a singular methodology to independently assess treatment plan dose distributions from clinical institutions participating in National Cancer Institute trials.

  2. Predictive and concurrent validity of the Braden scale in long-term care: a meta-analysis.

    PubMed

    Wilchesky, Machelle; Lungu, Ovidiu

    2015-01-01

    Pressure ulcer prevention is an important long-term care (LTC) quality indicator. While the Braden Scale is a recommended risk assessment tool, there is a paucity of information specifically pertaining to its validity within the LTC setting. We therefore undertook a systematic review and meta-analysis comparing Braden Scale predictive and concurrent validity within this context. We searched the Medline, EMBASE, PsychINFO and PubMed databases from 1985-2014 for studies containing the requisite information to analyze tool validity. Our initial search yielded 3,773 articles. Eleven datasets emanating from nine published studies describing 40,361 residents met all meta-analysis inclusion criteria and were analyzed using random effects models. Pooled sensitivity, specificity, positive predictive value (PPV), and negative predictive value were 86%, 38%, 28%, and 93%, respectively. Specificity was poorer in concurrent samples than in predictive samples (38% vs. 72%), while PPV was low in both sample types (25% and 37%). Though random effects model results showed that the Scale had good overall predictive ability [RR, 4.33; 95% CI, 3.28-5.72], none of the concurrent samples were found to have "optimal" sensitivity and specificity. In conclusion, the appropriateness of the Braden Scale in LTC is questionable given its low specificity and PPV, particularly in concurrent validity studies. Future studies should further explore the extent to which the apparent low validity of the Scale in LTC is due to the choice of cutoff point and/or preventive strategies implemented by LTC staff as a matter of course. © 2015 by the Wound Healing Society.

  3. Experimental Validation of Lightning-Induced Electromagnetic (Indirect) Coupling to Short Monopole Antennas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crull, E W; Brown Jr., C G; Perkins, M P

    2008-07-30

    For short monopoles in this low-power case, it has been shown that a simple circuit model is capable of accurate predictions for the shape and magnitude of the antenna response to lightning-generated electric field coupling effects, provided that the elements of the circuit model have accurate values. Numerical EM simulation can be used to provide more accurate values for the circuit elements than the simple analytical formulas, since the analytical formulas are used outside of their region of validity. However, even with the approximate analytical formulas the simple circuit model produces reasonable results, which would improve if more accurate analytical models were used. This report discusses the coupling analysis approaches taken to understand the interaction between a time-varying EM field and a short monopole antenna, within the context of lightning safety for nuclear weapons at DOE facilities. It describes the validation of a simple circuit model using a laboratory study in order to understand the indirect coupling of energy into a part, and the resulting voltage. Results show that in this low-power case, the circuit model predicts peak voltages within approximately 32% using circuit component values obtained from analytical formulas and within about 13% using circuit component values obtained from numerical EM simulation. We note that the analytical formulas are used outside of their region of validity. First, the antenna is insulated rather than a bare wire, and there may be fringing field effects near the termination of the outer conductor that the formula does not take into account. Also, the effective height formula is for a monopole directly over a ground plane, while in the time-domain measurement setup the monopole is elevated above the ground plane by about 1.5 inches (refer to Figure 5).
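    A circuit model of this kind commonly treats the electrically short monopole as a voltage source (effective height times incident field) behind the antenna capacitance, driving a capacitive load. A hedged sketch of that idea with illustrative dimensions (not the report's actual antenna or circuit values):

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity (F/m)

def monopole_load_voltage(e_field, height, wire_radius, c_load):
    """Quasi-static estimate of the voltage coupled onto a capacitive load
    by an electrically short monopole in a vertical field e_field (V/m).
    Uses effective height ~ height/2 and a thin-monopole capacitance
    estimate; both are rough approximations."""
    c_ant = 2 * math.pi * EPS0 * height / (math.log(2 * height / wire_radius) - 1)
    v_open = 0.5 * height * e_field           # open-circuit source voltage
    return v_open * c_ant / (c_ant + c_load)  # capacitive voltage divider

# Illustrative numbers: 10 cm monopole, 1 mm radius, 10 pF load, 1 kV/m field.
v_peak = monopole_load_voltage(1000.0, 0.1, 1e-3, 10e-12)
```

    As the report notes, replacing the analytical capacitance and effective-height estimates with values from numerical EM simulation is what tightens the prediction error.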

  4. A Model for Investigating Predictive Validity at Highly Selective Institutions.

    ERIC Educational Resources Information Center

    Gross, Alan L.; And Others

    A statistical model for investigating predictive validity at highly selective institutions is described. When the selection ratio is small, one must typically deal with a data set containing relatively large amounts of missing data on both criterion and predictor variables. Standard statistical approaches are based on the strong assumption that…

  5. An updated PREDICT breast cancer prognostication and treatment benefit prediction model with independent validation.

    PubMed

    Candido Dos Reis, Francisco J; Wishart, Gordon C; Dicks, Ed M; Greenberg, David; Rashbass, Jem; Schmidt, Marjanka K; van den Broek, Alexandra J; Ellis, Ian O; Green, Andrew; Rakha, Emad; Maishman, Tom; Eccles, Diana M; Pharoah, Paul D P

    2017-05-22

    PREDICT is a breast cancer prognostic and treatment benefit model implemented online. The overall fit of the model has been good in multiple independent case series, but PREDICT has been shown to underestimate breast cancer specific mortality in women diagnosed under the age of 40. Another limitation is the use of discrete categories for tumour size and node status, resulting in 'step' changes in risk estimates on moving between categories. We have refitted the PREDICT prognostic model using the original cohort of cases from East Anglia with updated survival times in order to take age at diagnosis into account and to smooth out the survival function for tumour size and node status. Multivariable Cox regression models were used to fit separate models for ER negative and ER positive disease. Continuous variables were fitted using fractional polynomials, and a smoothed baseline hazard was obtained by regressing the baseline cumulative hazard for each patient against time using fractional polynomials. The fit of the prognostic models was then tested in three independent data sets that had also been used to validate the original version of PREDICT. In the model fitting data, after adjusting for other prognostic variables, there is an increase in the risk of breast cancer specific mortality in younger and older patients with ER positive disease, with a substantial increase in risk for women diagnosed before the age of 35. In ER negative disease the risk increases slightly with age. The association between breast cancer specific mortality and both tumour size and number of positive nodes was non-linear, with a more marked increase in risk with increasing size and increasing number of nodes in ER positive disease. The overall calibration and discrimination of the new version of PREDICT (v2) was good and comparable to that of the previous version in both model development and validation data sets. However, the calibration of v2 improved over v1 in patients diagnosed under the age of 40.

  6. Development and validation of immune dysfunction score to predict 28-day mortality of sepsis patients

    PubMed Central

    Fang, Wen-Feng; Douglas, Ivor S.; Chen, Yu-Mu; Lin, Chiung-Yu; Kao, Hsu-Ching; Fang, Ying-Tang; Huang, Chi-Han; Chang, Ya-Ting; Huang, Kuo-Tung; Wang, Yi-His; Wang, Chin-Chou

    2017-01-01

    Background Sepsis-induced immune dysfunction, ranging from cytokine storm to immunoparalysis, impacts outcomes. Monitoring immune dysfunction enables better risk stratification and mortality prediction and is mandatory before wide application of immunoadjuvant therapies. We aimed to develop and validate a scoring system based on patients' immune dysfunction status for 28-day mortality prediction. Methods A prospective observational study of a cohort of adult sepsis patients admitted to the ICU between August 2013 and June 2016 at Kaohsiung Chang Gung Memorial Hospital in Taiwan. We evaluated immune dysfunction status through measurement of baseline plasma cytokine levels, monocyte human leukocyte antigen-DR expression by flow cytometry, and stimulated immune response using the post-LPS-stimulation cytokine elevation ratio. An immune dysfunction score was created for 28-day mortality prediction and was validated. Results A total of 151 patients were enrolled. Data from the first 106 consecutive septic patients comprised the training cohort, and data from the other 45 patients comprised the validation cohort. Among the 106 patients, 21 died and 85 were still alive on day 28 after ICU admission (mortality rate, 19.8%). Independent predictive factors revealed via multivariate logistic regression analysis included the segmented neutrophil-to-monocyte ratio, granulocyte colony-stimulating factor, interleukin-10, and monocyte human leukocyte antigen-antigen D–related levels, all of which were selected to construct the score, which predicted 28-day mortality with an area under the curve of 0.853 and 0.789 in the training and validation cohorts, respectively. Conclusions The immune dysfunction scoring system developed here, which includes plasma granulocyte colony-stimulating factor level, interleukin-10 level, serum segmented neutrophil-to-monocyte ratio, and monocyte human leukocyte antigen-antigen D–related expression, appears valid and reproducible for predicting 28-day mortality. PMID:29073262

  7. Analytical validation of an explicit finite element model of a rolling element bearing with a localised line spall

    NASA Astrophysics Data System (ADS)

    Singh, Sarabjeet; Howard, Carl Q.; Hansen, Colin H.; Köpke, Uwe G.

    2018-03-01

    In this paper, numerically modelled vibration response of a rolling element bearing with a localised outer raceway line spall is presented. The results were obtained from a finite element (FE) model of the defective bearing solved using an explicit dynamics FE software package, LS-DYNA. Time domain vibration signals of the bearing obtained directly from the FE modelling were processed further to estimate time-frequency and frequency domain results, such as spectrogram and power spectrum, using standard signal processing techniques pertinent to the vibration-based monitoring of rolling element bearings. A logical approach to analyses of the numerically modelled results was developed with an aim to presenting the analytical validation of the modelled results. While the time and frequency domain analyses of the results show that the FE model generates accurate bearing kinematics and defect frequencies, the time-frequency analysis highlights the simulation of distinct low- and high-frequency characteristic vibration signals associated with the unloading and reloading of the rolling elements as they move in and out of the defect, respectively. Favourable agreement of the numerical and analytical results demonstrates the validation of the results from the explicit FE modelling of the bearing.
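    The defect frequencies referred to above follow from standard bearing kinematics. A sketch of the usual formulas for a stationary outer ring (the geometry values used below are illustrative, not the modelled bearing's):

```python
import math

def bearing_defect_frequencies(n_elements, shaft_hz, d_element, d_pitch,
                               contact_angle_deg=0.0):
    """Kinematic defect frequencies (Hz) for a rolling element bearing
    with a stationary outer ring. BPFO is the rate at which rolling
    elements strike a fixed outer-raceway defect such as a line spall."""
    ratio = (d_element / d_pitch) * math.cos(math.radians(contact_angle_deg))
    return {
        "BPFO": 0.5 * n_elements * shaft_hz * (1.0 - ratio),  # outer race
        "BPFI": 0.5 * n_elements * shaft_hz * (1.0 + ratio),  # inner race
        "FTF": 0.5 * shaft_hz * (1.0 - ratio),                # cage
    }

# Illustrative geometry: 8 elements, 10 Hz shaft, element/pitch ratio 0.2.
freqs = bearing_defect_frequencies(8, 10.0, 0.02, 0.10)
```

    Checking that the spectrum of the simulated vibration signal shows peaks at BPFO and its harmonics is precisely the kind of kinematic validation the paper performs for its FE model.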

  8. Choice Defines Value: A Predictive Modeling Competition in Health Preference Research.

    PubMed

    Jakubczyk, Michał; Craig, Benjamin M; Barra, Mathias; Groothuis-Oudshoorn, Catharina G M; Hartman, John D; Huynh, Elisabeth; Ramos-Goñi, Juan M; Stolk, Elly A; Rand, Kim

    2018-02-01

    To identify which specifications and approaches to model selection better predict health preferences, the International Academy of Health Preference Research (IAHPR) hosted a predictive modeling competition including 18 teams from around the world. In April 2016, an exploratory survey was fielded: 4074 US respondents completed 20 out of 1560 paired comparisons by choosing between two health descriptions (e.g., longer life span vs. better health). The exploratory data were distributed to all teams. By July, eight teams had submitted their predictions for 1600 additional pairs and described their analytical approach. After these predictions had been posted online, a confirmatory survey was fielded (4148 additional respondents). The victorious team, "Discreetly Charming Econometricians," led by Michał Jakubczyk, achieved the smallest χ², 4391.54 (a predefined criterion). Its primary scientific findings were that different models performed better with different pairs, that the value of life span is not constant proportional, and that logit models have poor predictive validity in health valuation. The results demonstrated the diversity and potential of new analytical approaches in health preference research and highlighted the importance of predictive validity in health valuation. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  9. Cardiac data mining (CDM); organization and predictive analytics on biomedical (cardiac) data

    NASA Astrophysics Data System (ADS)

    Bilal, M. Musa; Hussain, Masood; Basharat, Iqra; Fatima, Mamuna

    2013-10-01

Data mining and data analytics have been of immense importance to many different fields as we witness the evolution of data science over recent years. Biostatistics and medical informatics have proved to be the foundation of many modern biological theories and analysis techniques. These are the fields that apply data mining practices along with statistical models to discover hidden trends in data comprising biological experiments or procedures on different entities. The objective of this research study is to develop a system for the efficient extraction, transformation and loading of such data from cardiologic procedure reports provided by the Armed Forces Institute of Cardiology. It also aims to devise a model for the predictive analysis and classification of this data into important classes required by cardiologists around the world, including predicting patient impressions and other important features.

  10. Development and Validation of an Empiric Tool to Predict Favorable Neurologic Outcomes Among PICU Patients.

    PubMed

    Gupta, Punkaj; Rettiganti, Mallikarjuna; Gossett, Jeffrey M; Daufeldt, Jennifer; Rice, Tom B; Wetzel, Randall C

    2018-01-01

To create a novel tool to predict favorable neurologic outcomes during ICU stay among children with critical illness. Logistic regression models using adaptive lasso methodology were used to identify independent factors associated with favorable neurologic outcomes. A mixed effects logistic regression model was used to create the final prediction model including all predictors selected from the lasso model. Model validation was performed using a 10-fold internal cross-validation approach. Virtual Pediatric Systems (VPS, LLC, Los Angeles, CA) database. Patients less than 18 years old admitted to one of the participating ICUs in the Virtual Pediatric Systems database were included (2009-2015). None. A total of 160,570 patients from 90 hospitals qualified for inclusion. Of these, 1,675 patients (1.04%) had a decline in Pediatric Cerebral Performance Category scale of at least 2 between ICU admission and ICU discharge (unfavorable neurologic outcome). The independent factors associated with unfavorable neurologic outcome included higher weight at ICU admission, higher Pediatric Index of Mortality-2 score at ICU admission, cardiac arrest, stroke, seizures, head/nonhead trauma, use of conventional mechanical ventilation and high-frequency oscillatory ventilation, prolonged ICU length of stay, and prolonged use of mechanical ventilation. The presence of chromosomal anomaly, cardiac surgery, and utilization of nitric oxide were associated with favorable neurologic outcome. The final online prediction tool can be accessed at https://soipredictiontool.shinyapps.io/GNOScore/. In an internal validation sample, our model predicted favorable neurologic outcomes in 139,688 patients, compared with 139,591 patients observed to have favorable neurologic outcomes. The area under the receiver operating curve for the validation model was 0.90. This proposed prediction tool encompasses 20 risk factors into one probability to predict
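The modelling pipeline described (lasso-based variable selection for a logistic model, checked by 10-fold cross-validation) can be sketched with scikit-learn on synthetic data. Note this uses a plain L1 penalty rather than the two-step adaptive lasso, and none of the variables or data are from the VPS cohort:

```python
# Sketch: L1-penalised logistic regression selects predictors, with 10-fold
# cross-validated discrimination (AUC). Synthetic data, not the study cohort.
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))            # 8 candidate risk factors
logit = X[:, 0] * 1.5 - X[:, 1] + 0.3    # only two are truly predictive
y = (rng.random(500) < 1 / (1 + np.exp(-logit))).astype(int)

# L1 penalty performs variable selection akin to the lasso step
model = LogisticRegressionCV(Cs=10, cv=10, penalty="l1",
                             solver="liblinear", scoring="roc_auc")
model.fit(X, y)
selected = [i for i, c in enumerate(model.coef_[0]) if abs(c) > 1e-6]
auc = cross_val_score(model, X, y, cv=10, scoring="roc_auc").mean()
```

The truly predictive columns (0 and 1) should survive the penalty while most noise columns are shrunk to zero.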

  11. Big Data and Predictive Analytics: Applications in the Care of Children.

    PubMed

    Suresh, Srinivasan

    2016-04-01

Emerging changes in the United States' healthcare delivery model have led to renewed interest in data-driven methods for managing quality of care. Analytics (data plus information) plays a key role in predictive risk assessment, clinical decision support, and various patient throughput measures. This article reviews the application of a pediatric risk score, which is integrated into our hospital's electronic medical record and provides an early warning sign for clinical deterioration. Dashboards, which are part of disease management systems, are a vital tool for peer benchmarking and can help reduce unnecessary variations in care. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. A predictive analytic model for the solar modulation of cosmic rays

    DOE PAGES

    Cholis, Ilias; Hooper, Dan; Linden, Tim

    2016-02-23

An important factor limiting our ability to understand the production and propagation of cosmic rays pertains to the effects of heliospheric forces, commonly known as solar modulation. The solar wind is capable of generating time- and charge-dependent effects on the spectrum and intensity of low-energy (≲10 GeV) cosmic rays reaching Earth. Previous analytic treatments of solar modulation have utilized the force-field approximation, in which a simple potential is adopted whose amplitude is selected to best fit the cosmic-ray data taken over a given period of time. Making use of recently available cosmic-ray data from the Voyager 1 spacecraft, along with measurements of the heliospheric magnetic field and solar wind, we construct a time-, charge- and rigidity-dependent model of solar modulation that can be directly compared to data from a variety of cosmic-ray experiments. Here, we provide a simple analytic formula that can be easily utilized in a variety of applications, allowing us to better predict the effects of solar modulation and reduce the number of free parameters involved in cosmic-ray propagation models.
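The force-field approximation that the paper improves upon maps an assumed local interstellar spectrum (LIS) to the modulated spectrum at Earth through a single potential; a minimal sketch for protons follows (the power-law LIS below is a placeholder, not a fitted spectrum):

```python
# Force-field approximation for solar modulation (the baseline treatment the
# paper refines). Maps an assumed local interstellar proton spectrum J_LIS to
# the spectrum observed at Earth for a modulation potential phi (in GeV for
# protons, |Z| = 1). The power-law J_LIS is illustrative only.
M_P = 0.938  # proton mass, GeV

def j_lis(e_kin):
    """Placeholder interstellar spectrum (arbitrary units)."""
    return (e_kin + M_P) ** -2.7

def j_earth(e_kin, phi):
    """Force-field-modulated intensity at Earth; kinetic energy in GeV."""
    e_shift = e_kin + phi  # energy lost crossing the heliosphere
    suppression = (e_kin * (e_kin + 2 * M_P)) / (e_shift * (e_shift + 2 * M_P))
    return suppression * j_lis(e_shift)

# modulation suppresses low-energy flux far more than high-energy flux
low = j_earth(0.1, 0.5) / j_lis(0.1)
high = j_earth(10.0, 0.5) / j_lis(10.0)
```

The single-parameter suppression illustrates why the force-field form cannot capture the time- and charge-dependent effects the paper's model addresses.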

  13. Analytical methods to predict liquid congealing in ram air heat exchangers during cold operation

    NASA Astrophysics Data System (ADS)

    Coleman, Kenneth; Kosson, Robert

    1989-07-01

    Ram air heat exchangers used to cool liquids such as lube oils or Ethylene-Glycol/water solutions can be subject to congealing in very cold ambients, resulting in a loss of cooling capability. Two-dimensional, transient analytical models have been developed to explore this phenomenon with both continuous and staggered fin cores. Staggered fin predictions are compared to flight test data from the E-2C Allison T56 engine lube oil system during winter conditions. For simpler calculations, a viscosity ratio correction was introduced and found to provide reasonable cold ambient performance predictions for the staggered fin core, using a one-dimensional approach.

  14. Unsteady Aerodynamic Validation Experiences From the Aeroelastic Prediction Workshop

    NASA Technical Reports Server (NTRS)

Heeg, Jennifer; Chwalowski, Pawel

    2014-01-01

    The AIAA Aeroelastic Prediction Workshop (AePW) was held in April 2012, bringing together communities of aeroelasticians, computational fluid dynamicists and experimentalists. The extended objective was to assess the state of the art in computational aeroelastic methods as practical tools for the prediction of static and dynamic aeroelastic phenomena. As a step in this process, workshop participants analyzed unsteady aerodynamic and weakly-coupled aeroelastic cases. Forced oscillation and unforced system experiments and computations have been compared for three configurations. This paper emphasizes interpretation of the experimental data, computational results and their comparisons from the perspective of validation of unsteady system predictions. The issues examined in detail are variability introduced by input choices for the computations, post-processing, and static aeroelastic modeling. The final issue addressed is interpreting unsteady information that is present in experimental data that is assumed to be steady, and the resulting consequences on the comparison data sets.

  15. Development and validation of a predictive equation for lean body mass in children and adolescents.

    PubMed

    Foster, Bethany J; Platt, Robert W; Zemel, Babette S

    2012-05-01

Lean body mass (LBM) is not easy to measure directly in the field or clinical setting. Equations to predict LBM from simple anthropometric measures, which account for the differing contributions of fat and lean to body weight at different ages and levels of adiposity, would be useful to both human biologists and clinicians. To develop and validate equations to predict LBM in children and adolescents across the entire range of the adiposity spectrum. Dual energy X-ray absorptiometry was used to measure LBM in 836 healthy children (437 females) and linear regression was used to develop sex-specific equations to estimate LBM from height, weight, age, body mass index (BMI) for age z-score and population ancestry. Equations were validated using bootstrapping methods, in a local independent sample of 332 children, and in national data collected by NHANES. The mean difference between measured and predicted LBM was -0.12% (95% limits of agreement -11.3% to 8.5%) for males and -0.14% (-11.9% to 10.9%) for females. Equations performed equally well across the entire adiposity spectrum, as estimated by BMI z-score. Validation indicated no over-fitting. LBM was predicted within 5% of measured LBM in the validation sample. The equations estimate LBM accurately from simple anthropometric measures.
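The agreement statistics quoted (mean percentage difference with 95% limits of agreement) follow the standard Bland-Altman construction: mean difference ± 1.96 SD of the differences. A sketch on illustrative numbers, not the study data:

```python
# Bland-Altman style agreement between a measured quantity and its predicted
# value, expressed as percentage differences. Toy values, not the study data.
import statistics

def limits_of_agreement(measured, predicted):
    # percentage differences relative to the measured value
    diffs = [100 * (p - m) / m for m, p in zip(measured, predicted)]
    mean = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return mean, mean - 1.96 * sd, mean + 1.96 * sd

measured  = [30.0, 25.0, 40.0, 35.0, 28.0]   # e.g. DXA lean mass, kg
predicted = [29.5, 25.8, 39.0, 35.5, 27.6]   # equation estimates, kg
mean, lower, upper = limits_of_agreement(measured, predicted)
```

A mean near zero with narrow limits indicates an unbiased, precise prediction equation.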

  16. Analytical ice shape predictions for flight in natural icing conditions

    NASA Technical Reports Server (NTRS)

    Berkowitz, Brian M.; Riley, James T.

    1988-01-01

LEWICE is an analytical ice prediction code that has been evaluated against icing tunnel data, but on a more limited basis against flight data. Ice shapes predicted by LEWICE are compared with experimental ice shapes accreted on the NASA Lewis Icing Research Aircraft. The flight data selected for comparison include liquid water content recorded using a hot wire device and droplet distribution data from a laser spectrometer; the ice shape is recorded using stereo photography. The main findings are as follows: (1) An equivalent sand grain roughness correlation different from that used for LEWICE tunnel comparisons must be employed to obtain satisfactory results for flight; (2) Using this correlation and making no other changes in the code, the comparisons to ice shapes accreted in flight are in general as good as the comparisons to ice shapes accreted in the tunnel (as in the case of tunnel ice shapes, agreement is least reliable for large glaze ice shapes at high angles of attack); (3) In some cases comparisons can be somewhat improved by utilizing the code so as to take account of the variation of parameters such as liquid water content, which may vary significantly in flight.

  17. Validating a Predictive Model of Acute Advanced Imaging Biomarkers in Ischemic Stroke.

    PubMed

    Bivard, Andrew; Levi, Christopher; Lin, Longting; Cheng, Xin; Aviv, Richard; Spratt, Neil J; Lou, Min; Kleinig, Tim; O'Brien, Billy; Butcher, Kenneth; Zhang, Jingfen; Jannes, Jim; Dong, Qiang; Parsons, Mark

    2017-03-01

Advanced imaging to identify tissue pathophysiology may provide more accurate prognostication than the clinical measures used currently in stroke. This study aimed to derive and validate a predictive model for functional outcome based on acute clinical and advanced imaging measures. A database of prospectively collected patients with ischemic stroke, presenting within 4.5 hours of onset and assessed for thrombolysis at 5 centers, who had computed tomographic perfusion and computed tomographic angiography before a treatment decision, was assessed. Individual variable cut points were derived from a classification and regression tree analysis. The optimal cut points for each assessment variable were then used in a backward logistic regression to predict modified Rankin scale (mRS) score of 0 to 1 and 5 to 6. The variables remaining in the models were then assessed using a receiver operating characteristic curve analysis. Overall, 1519 patients were included in the study, 635 in the derivation cohort and 884 in the validation cohort. The model was highly accurate at predicting mRS score of 0 to 1 in all patients considered for thrombolysis therapy (area under the curve [AUC] 0.91), those who were treated (AUC 0.88), and those with recanalization (AUC 0.89). The model was similarly accurate at predicting mRS score of 5 to 6 in all patients considered for thrombolysis therapy (AUC 0.91), those who were treated (AUC 0.89), and those with recanalization (AUC 0.91). The odds ratio of thrombolysed patients who met the model criteria achieving mRS score of 0 to 1 was 17.89 (4.59-36.35, P<0.001), and for mRS score of 5 to 6 it was 8.23 (2.57-26.97, P<0.001). This study derived and validated a highly accurate model for predicting patient outcome after ischemic stroke. © 2017 American Heart Association, Inc.
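The discrimination measures reported are areas under the receiver operating characteristic curve. The AUC equals the probability that a randomly chosen positive case receives a higher predicted score than a randomly chosen negative case, which gives a simple rank-based computation (the scores and labels below are illustrative only):

```python
# Rank-based (Mann-Whitney) computation of the area under the ROC curve.
# Ties between a positive and a negative score count as half a "win".

def roc_auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# illustrative predicted probabilities of a good outcome, with true labels
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,   0,   0]
auc = roc_auc(scores, labels)
```

An AUC of 0.5 is chance-level discrimination; values near 0.9, as in the study, indicate strong separation of outcomes.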

  18. The Reliability and Predictive Validity of the Stalking Risk Profile.

    PubMed

    McEwan, Troy E; Shea, Daniel E; Daffern, Michael; MacKenzie, Rachel D; Ogloff, James R P; Mullen, Paul E

    2018-03-01

This study assessed the reliability and validity of the Stalking Risk Profile (SRP), a structured measure for assessing stalking risks. The SRP was administered at the point of assessment or retrospectively from file review for 241 adult stalkers (91% male) referred to a community-based forensic mental health service. Interrater reliability was high for stalker type, and moderate-to-substantial for risk judgments and domain scores. Evidence for predictive validity and discrimination between stalking recidivists and nonrecidivists for risk judgments depended on follow-up duration. Discrimination was moderate (area under the curve = 0.66-0.68) and positive and negative predictive values good over the full follow-up period (Mdn = 170.43 weeks). At 6 months, discrimination was better than chance only for judgments related to stalking of new victims (area under the curve = 0.75); however, high-risk stalkers still reoffended against their original victim(s) 2 to 4 times as often as low-risk stalkers. Implications for the clinical utility and refinement of the SRP are discussed.

  19. Test-Retest Reliability and Predictive Validity of the Implicit Association Test in Children

    ERIC Educational Resources Information Center

    Rae, James R.; Olson, Kristina R.

    2018-01-01

    The Implicit Association Test (IAT) is increasingly used in developmental research despite minimal evidence of whether children's IAT scores are reliable across time or predictive of behavior. When test-retest reliability and predictive validity have been assessed, the results have been mixed, and because these studies have differed on many…

  20. A Unified Model of Performance: Validation of its Predictions across Different Sleep/Wake Schedules

    PubMed Central

    Ramakrishnan, Sridhar; Wesensten, Nancy J.; Balkin, Thomas J.; Reifman, Jaques

    2016-01-01

    Study Objectives: Historically, mathematical models of human neurobehavioral performance developed on data from one sleep study were limited to predicting performance in similar studies, restricting their practical utility. We recently developed a unified model of performance (UMP) to predict the effects of the continuum of sleep loss—from chronic sleep restriction (CSR) to total sleep deprivation (TSD) challenges—and validated it using data from two studies of one laboratory. Here, we significantly extended this effort by validating the UMP predictions across a wide range of sleep/wake schedules from different studies and laboratories. Methods: We developed the UMP on psychomotor vigilance task (PVT) lapse data from one study encompassing four different CSR conditions (7 d of 3, 5, 7, and 9 h of sleep/night), and predicted performance in five other studies (from four laboratories), including different combinations of TSD (40 to 88 h), CSR (2 to 6 h of sleep/night), control (8 to 10 h of sleep/night), and nap (nocturnal and diurnal) schedules. Results: The UMP accurately predicted PVT performance trends across 14 different sleep/wake conditions, yielding average prediction errors between 7% and 36%, with the predictions lying within 2 standard errors of the measured data 87% of the time. In addition, the UMP accurately predicted performance impairment (average error of 15%) for schedules (TSD and naps) not used in model development. Conclusions: The unified model of performance can be used as a tool to help design sleep/wake schedules to optimize the extent and duration of neurobehavioral performance and to accelerate recovery after sleep loss. Citation: Ramakrishnan S, Wesensten NJ, Balkin TJ, Reifman J. A unified model of performance: validation of its predictions across different sleep/wake schedules. SLEEP 2016;39(1):249–262. PMID:26518594

  1. Validity of the Student Risk Screening Scale: Evidence of Predictive Validity in a Diverse, Suburban Elementary Setting

    ERIC Educational Resources Information Center

    Menzies, Holly M.; Lane, Kathleen Lynne

    2012-01-01

    In this study the authors examined the psychometric properties of the "Student Risk Screening Scale" (SRSS), including predictive validity in terms of student outcomes in behavioral and academic domains. The school, a diverse, suburban school in Southern California, administered the SRSS at three time points as part of regular school…

  2. Development and External Validation of a Melanoma Risk Prediction Model Based on Self-assessed Risk Factors.

    PubMed

    Vuong, Kylie; Armstrong, Bruce K; Weiderpass, Elisabete; Lund, Eiliv; Adami, Hans-Olov; Veierod, Marit B; Barrett, Jennifer H; Davies, John R; Bishop, D Timothy; Whiteman, David C; Olsen, Catherine M; Hopper, John L; Mann, Graham J; Cust, Anne E; McGeechan, Kevin

    2016-08-01

Identifying individuals at high risk of melanoma can optimize primary and secondary prevention strategies. To develop and externally validate a risk prediction model for incident first-primary cutaneous melanoma using self-assessed risk factors. We used unconditional logistic regression to develop a multivariable risk prediction model. Relative risk estimates from the model were combined with Australian melanoma incidence and competing mortality rates to obtain absolute risk estimates. A risk prediction model was developed using the Australian Melanoma Family Study (629 cases and 535 controls) and externally validated using 4 independent population-based studies: the Western Australia Melanoma Study (511 case-control pairs), Leeds Melanoma Case-Control Study (960 cases and 513 controls), Epigene-QSkin Study (44 544 participants, of whom 766 had melanoma), and Swedish Women's Lifestyle and Health Cohort Study (49 259 women, of whom 273 had melanoma). We validated model performance internally and externally by assessing discrimination using the area under the receiver operating curve (AUC). Additionally, using the Swedish Women's Lifestyle and Health Cohort Study, we assessed model calibration and clinical usefulness. The risk prediction model included hair color, nevus density, first-degree family history of melanoma, previous nonmelanoma skin cancer, and lifetime sunbed use. On internal validation, the AUC was 0.70 (95% CI, 0.67-0.73). On external validation, the AUC was 0.66 (95% CI, 0.63-0.69) in the Western Australia Melanoma Study, 0.67 (95% CI, 0.65-0.70) in the Leeds Melanoma Case-Control Study, 0.64 (95% CI, 0.62-0.66) in the Epigene-QSkin Study, and 0.63 (95% CI, 0.60-0.67) in the Swedish Women's Lifestyle and Health Cohort Study. Model calibration showed close agreement between predicted and observed numbers of incident melanomas across all deciles of predicted risk. In the external validation setting, there was higher net benefit when using the risk prediction
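Combining relative risks from a logistic model with population incidence and competing mortality rates to obtain absolute risk, as described above, is a standard actuarial construction. A sketch with illustrative yearly rates, not the Australian figures used in the study:

```python
# Absolute (cumulative) risk from an individual's relative risk plus baseline
# incidence and competing mortality. All rates below are illustrative.

def absolute_risk(rr, incidence, mortality):
    """Cumulative melanoma risk over successive years.

    rr: individual's relative risk vs. the population baseline
    incidence: yearly baseline incidence rates
    mortality: yearly competing (all-cause) mortality rates
    """
    surv = 1.0   # probability of being alive and event-free so far
    risk = 0.0
    for inc, mort in zip(incidence, mortality):
        hazard = rr * inc
        risk += surv * hazard                 # event occurs this year
        surv *= (1 - hazard) * (1 - mort)     # survives the year event-free
    return risk

# 10 years at 0.1% baseline incidence, 1% competing mortality, RR = 2
r = absolute_risk(rr=2.0, incidence=[0.001] * 10, mortality=[0.01] * 10)
```

Competing mortality matters because individuals who die of other causes can no longer develop the disease, so ignoring it overstates absolute risk.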

  3. Big data analytics to improve cardiovascular care: promise and challenges.

    PubMed

    Rumsfeld, John S; Joynt, Karen E; Maddox, Thomas M

    2016-06-01

    The potential for big data analytics to improve cardiovascular quality of care and patient outcomes is tremendous. However, the application of big data in health care is at a nascent stage, and the evidence to date demonstrating that big data analytics will improve care and outcomes is scant. This Review provides an overview of the data sources and methods that comprise big data analytics, and describes eight areas of application of big data analytics to improve cardiovascular care, including predictive modelling for risk and resource use, population management, drug and medical device safety surveillance, disease and treatment heterogeneity, precision medicine and clinical decision support, quality of care and performance measurement, and public health and research applications. We also delineate the important challenges for big data applications in cardiovascular care, including the need for evidence of effectiveness and safety, the methodological issues such as data quality and validation, and the critical importance of clinical integration and proof of clinical utility. If big data analytics are shown to improve quality of care and patient outcomes, and can be successfully implemented in cardiovascular practice, big data will fulfil its potential as an important component of a learning health-care system.

  4. Pre-trial inter-laboratory analytical validation of the FOCUS4 personalised therapy trial.

    PubMed

    Richman, Susan D; Adams, Richard; Quirke, Phil; Butler, Rachel; Hemmings, Gemma; Chambers, Phil; Roberts, Helen; James, Michelle D; Wozniak, Sue; Bathia, Riya; Pugh, Cheryl; Maughan, Timothy; Jasani, Bharat

    2016-01-01

    Molecular characterisation of tumours is increasing personalisation of cancer therapy, tailored to an individual and their cancer. FOCUS4 is a molecularly stratified clinical trial for patients with advanced colorectal cancer. During an initial 16-week period of standard first-line chemotherapy, tumour tissue will undergo several molecular assays, with the results used for cohort allocation, then randomisation. Laboratories in Leeds and Cardiff will perform the molecular testing. The results of a rigorous pre-trial inter-laboratory analytical validation are presented and discussed. Wales Cancer Bank supplied FFPE tumour blocks from 97 mCRC patients with consent for use in further research. Both laboratories processed each sample according to an agreed definitive FOCUS4 laboratory protocol, reporting results directly to the MRC Trial Management Group for independent cross-referencing. Pyrosequencing analysis of mutation status at KRAS codons12/13/61/146, NRAS codons12/13/61, BRAF codon600 and PIK3CA codons542/545/546/1047, generated highly concordant results. Two samples gave discrepant results; in one a PIK3CA mutation was detected only in Leeds, and in the other, a PIK3CA mutation was only detected in Cardiff. pTEN and mismatch repair (MMR) protein expression was assessed by immunohistochemistry (IHC) resulting in 6/97 discordant results for pTEN and 5/388 for MMR, resolved upon joint review. Tumour heterogeneity was likely responsible for pyrosequencing discrepancies. The presence of signet-ring cells, necrosis, mucin, edge-effects and over-counterstaining influenced IHC discrepancies. Pre-trial assay analytical validation is essential to ensure appropriate selection of patients for targeted therapies. This is feasible for both mutation testing and immunohistochemical assays and must be built into the workup of such trials. ISRCTN90061564. Published by the BMJ Publishing Group Limited. 

  5. Development and validation of a multi-analyte method for the regulatory control of carotenoids used as feed additives in fish and poultry feed.

    PubMed

    Vincent, Ursula; Serano, Federica; von Holst, Christoph

    2017-08-01

Carotenoids are used in animal nutrition mainly as sensory additives that favourably affect the colour of fish, birds and food of animal origin. Various analytical methods exist for their quantification in compound feed, reflecting the different physico-chemical characteristics of the carotenoids and the corresponding feed additives. They may be natural products or specific formulations containing the target carotenoids produced by chemical synthesis. In this study a multi-analyte method was developed that can be applied to the determination of all 10 carotenoids currently authorised within the European Union for compound feedingstuffs. The method functions regardless of whether the carotenoids have been added to the compound feed via natural products or specific formulations. It comprises three steps: (1) digestion of the feed sample with an enzyme; (2) pressurised liquid extraction; and (3) quantification of the analytes by reversed-phase HPLC coupled to a photodiode array detector in the visible range. The method was single-laboratory validated for poultry and fish feed covering a mass fraction range of the target analyte from 2.5 to 300 mg kg⁻¹. The following method performance characteristics were obtained: the recovery rate varied from 82% to 129%, and precision, expressed as the relative standard deviation of intermediate precision, varied from 1.6% to 15%. Based on the acceptable performance obtained in the validation study, the multi-analyte method is considered fit for the intended purpose.
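The performance characteristics quoted (recovery rate and intermediate precision as relative standard deviation) are simple to compute from spiked-sample replicates; the replicate values below are illustrative, not the study's data:

```python
# Recovery (%) and relative standard deviation (%) from replicate
# measurements of a sample spiked at a known level. Toy values only.
import statistics

spiked = 100.0                                  # known added level, mg/kg
replicates = [95.2, 101.4, 98.7, 103.9, 97.3]   # measured values, mg/kg

recovery = 100 * statistics.mean(replicates) / spiked
rsd = 100 * statistics.stdev(replicates) / statistics.mean(replicates)
```

For intermediate precision, the replicates would span different days, analysts, or instrument runs rather than a single batch.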

  6. Understanding Interrater Reliability and Validity of Risk Assessment Tools Used to Predict Adverse Clinical Events.

    PubMed

    Siedlecki, Sandra L; Albert, Nancy M

This article describes how to assess the interrater reliability and validity of risk assessment tools, using easy-to-follow formulas, and provides calculations that demonstrate the principles discussed. Clinical nurse specialists should be able to identify risk assessment tools that provide high-quality interrater reliability and the highest validity for predicting true events of importance to clinical settings. Making best practice recommendations for assessment tool use is critical to high-quality patient care and safe practices that impact patient outcomes and nursing resources. Optimal risk assessment tool selection requires knowledge about interrater reliability and tool validity. The clinical nurse specialist will understand the reliability and validity issues associated with risk assessment tools, and be able to evaluate tools using basic calculations. Risk assessment tools are developed to objectively predict quality and safety events and ultimately reduce the risk of event occurrence through preventive interventions. To ensure high-quality tool use, clinical nurse specialists must critically assess tool properties. The better the tool's ability to predict adverse events, the more likely it is that event risk is mitigated. Interrater reliability and validity assessment is a relatively easy skill to master and will result in better decisions when selecting or making recommendations for risk assessment tool use.
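For a dichotomous risk judgment (at risk / not at risk), interrater reliability is commonly summarised with Cohen's kappa: observed agreement corrected for the agreement expected by chance. A sketch with toy ratings, not data from the article:

```python
# Cohen's kappa for two raters scoring the same patients on a dichotomous
# risk assessment tool. The ratings below are toy data for illustration.

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    categories = set(rater_a) | set(rater_b)
    # chance agreement from each rater's marginal category frequencies
    expected = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                   for c in categories)
    return (observed - expected) / (1 - expected)

a = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]   # rater A: 1 = at risk
b = [1, 1, 0, 0, 0, 0, 1, 1, 1, 1]   # rater B
kappa = cohens_kappa(a, b)
```

Kappa of 1 means perfect agreement and 0 means agreement no better than chance, which is why raw percent agreement alone can be misleading for rare or very common risk classifications.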

  7. Wetting boundary condition for the color-gradient lattice Boltzmann method: Validation with analytical and experimental data

    NASA Astrophysics Data System (ADS)

    Akai, Takashi; Bijeljic, Branko; Blunt, Martin J.

    2018-06-01

    In the color gradient lattice Boltzmann model (CG-LBM), a fictitious-density wetting boundary condition has been widely used because of its ease of implementation. However, as we show, this may lead to inaccurate results in some cases. In this paper, a new scheme for the wetting boundary condition is proposed which can handle complicated 3D geometries. The validity of our method for static problems is demonstrated by comparing the simulated results to analytical solutions in 2D and 3D geometries with curved boundaries. Then, capillary rise simulations are performed to study dynamic problems where the three-phase contact line moves. The results are compared to experimental results in the literature (Heshmati and Piri, 2014). If a constant contact angle is assumed, the simulations agree with the analytical solution based on the Lucas-Washburn equation. However, to match the experiments, we need to implement a dynamic contact angle that varies with the flow rate.
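The analytical benchmark mentioned is the Lucas-Washburn equation for capillary rise, h(t) = sqrt(γ r cosθ t / (2μ)) when gravity and inertia are neglected. A sketch with illustrative fluid and tube parameters, not those of the cited experiment:

```python
# Lucas-Washburn capillary rise: penetration depth grows as sqrt(t).
# Parameters are illustrative (water-like fluid, 0.2 mm tube), not the
# experimental conditions of Heshmati and Piri (2014).
import math

def lucas_washburn_height(t, gamma, r, theta_deg, mu):
    """Rise height (m): t in s, gamma in N/m, r in m, mu in Pa*s."""
    return math.sqrt(gamma * r * math.cos(math.radians(theta_deg)) * t
                     / (2 * mu))

h1 = lucas_washburn_height(t=1.0, gamma=0.072, r=2e-4, theta_deg=30.0, mu=1e-3)
h4 = lucas_washburn_height(t=4.0, gamma=0.072, r=2e-4, theta_deg=30.0, mu=1e-3)
# height grows as sqrt(t): quadrupling the time doubles the rise
```

A constant contact angle gives this closed-form behaviour; matching experiments, as the abstract notes, requires a dynamic contact angle that varies with flow rate.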

  8. CFD Validation Studies for Hypersonic Flow Prediction

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    2001-01-01

    A series of experiments to measure pressure and heating for code validation involving hypersonic, laminar, separated flows was conducted at the Calspan-University at Buffalo Research Center (CUBRC) in the Large Energy National Shock (LENS) tunnel. The experimental data serves as a focus for a code validation session but are not available to the authors until the conclusion of this session. The first set of experiments considered here involve Mach 9.5 and Mach 11.3 N2 flow over a hollow cylinder-flare with 30 degree flare angle at several Reynolds numbers sustaining laminar, separated flow. Truncated and extended flare configurations are considered. The second set of experiments, at similar conditions, involves flow over a sharp, double cone with fore-cone angle of 25 degrees and aft-cone angle of 55 degrees. Both sets of experiments involve 30 degree compressions. Location of the separation point in the numerical simulation is extremely sensitive to the level of grid refinement in the numerical predictions. The numerical simulations also show a significant influence of Reynolds number on extent of separation. Flow unsteadiness was easily introduced into the double cone simulations using aggressive relaxation parameters that normally promote convergence.

  10. Development and validation of the ORACLE score to predict risk of osteoporosis.

    PubMed

    Richy, Florent; Deceulaer, Fréderic; Ethgen, Olivier; Bruyère, Olivier; Reginster, Jean-Yves

    2004-11-01

To develop and validate a composite index, the Osteoporosis Risk Assessment by Composite Linear Estimate (ORACLE), which includes risk factors and ultrasonometric outcomes to screen for osteoporosis. Two cohorts of postmenopausal women aged 45 years and older participated in the development (n = 407) and the validation (n = 202) of ORACLE. Their bone mineral density was determined by dual energy x-ray absorptiometry and quantitative ultrasonometry (QUS), and their historical and clinical risk factors were assessed (January to June 2003). Logistic regression analysis was used to select significant predictors of bone mineral density, whereas receiver operating characteristic (ROC) analysis was used to assess the discriminatory performance of ORACLE. The final logistic regression model retained 4 biometric or historical variables and 1 ultrasonometric outcome. The areas under the ROC curve (AUCs) for ORACLE were 84% for the prediction of osteoporosis and 78% for low bone mass. A sensitivity of 90% corresponded to a specificity of 50% for identification of women at risk of developing osteoporosis. The corresponding positive and negative predictive values were 86% and 54%, respectively, in the development cohort. In the validation cohort, the AUCs for identification of osteoporosis and low bone mass were 81% and 76% for ORACLE, 69% and 64% for QUS T score, 71% and 68% for QUS ultrasonometric bone profile index, and 76% and 75% for the Osteoporosis Self-assessment Tool, respectively. ORACLE had the best discriminatory performance in identifying osteoporosis compared with the other approaches (P < .05). ORACLE exhibited the highest discriminatory properties compared with ultrasonography alone or other previously validated risk indices. It may be helpful to enhance the predictive value of QUS.
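Sensitivity, specificity, and the positive and negative predictive values reported above all derive from a 2×2 table of screening result against true status. A sketch with illustrative counts, not the ORACLE cohorts:

```python
# Screening performance metrics from a 2x2 table (true/false positives and
# negatives). The counts below are illustrative only.

def screening_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # precision among test positives
        "npv": tn / (tn + fn),           # reliability of a negative result
    }

m = screening_metrics(tp=90, fp=50, fn=10, tn=50)
```

Unlike sensitivity and specificity, the predictive values depend on how common the condition is in the screened population, so they do not transfer directly between cohorts with different prevalence.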

  11. Predictive validity of cannabis consumption measures: Results from a national longitudinal study.

    PubMed

    Buu, Anne; Hu, Yi-Han; Pampati, Sanjana; Arterberry, Brooke J; Lin, Hsien-Chang

    2017-10-01

    Validating the utility of cannabis consumption measures for predicting later cannabis-related symptomatology or progression to cannabis use disorder (CUD) is crucial for prevention and intervention work that may use consumption measures for quick screening. This study examined whether cannabis use quantity and frequency predicted CUD symptom counts, progression to onset of CUD, and persistence of CUD. Data from the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC) at Wave 1 (2001-2002) and Wave 2 (2004-2005) were used to identify three risk samples: (1) current cannabis users at Wave 1 who were at risk for having CUD symptoms at Wave 2; (2) current users without lifetime CUD who were at risk for incident CUD; and (3) current users with past-year CUD who were at risk for persistent CUD. Logistic regression and zero-inflated Poisson models were used to examine the longitudinal effect of cannabis consumption on CUD outcomes. Higher frequency of cannabis use predicted lower likelihood of being symptom-free, but it did not predict the severity of CUD symptomatology. Higher frequency of cannabis use also predicted higher likelihood of progression to onset of CUD and persistence of CUD. Cannabis use quantity, however, did not predict any of the developmental stages of CUD symptomatology examined in this study. This study provides a new piece of evidence supporting the predictive validity of cannabis use frequency based on national longitudinal data. The result supports the common practice of including frequency items in cannabis screening tools. Copyright © 2017 Elsevier Ltd. All rights reserved.
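The zero-inflated Poisson model used in this study treats zero counts as coming either from a "structural zero" class (with probability pi) or from an ordinary Poisson process. A minimal sketch of its probability mass function, with illustrative parameter values that are not estimates from NESARC:

```python
import math

def zip_pmf(k, lam, pi):
    """P(count = k) under a zero-inflated Poisson: a structural zero with
    probability pi, otherwise an ordinary Poisson(lam) count."""
    poisson_k = math.exp(-lam) * lam ** k / math.factorial(k)
    if k == 0:
        return pi + (1.0 - pi) * poisson_k
    return (1.0 - pi) * poisson_k

# Illustrative values: mean symptom count 2.0, 30% structural zeros.
# Zeros become far more likely than a plain Poisson would predict.
print(round(zip_pmf(0, 2.0, 0.3), 4))  # 0.3947
```

Setting pi to zero recovers the plain Poisson, which is why the zero-inflated form is a natural fit when many respondents report no symptoms at all.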

  12. Analytical Prediction of the Seismic Response of a Reinforced Concrete Containment Vessel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    James, R.J.; Rashid, Y.R.; Cherry, J.L.

    Under the sponsorship of the Ministry of International Trade and Industry (MITI) of Japan, the Nuclear Power Engineering Corporation (NUPEC) is investigating the seismic behavior of a Reinforced Concrete Containment Vessel (RCCV) through scale-model testing using the high-performance shaking table at the Tadotsu Engineering Laboratory. A series of tests representing design-level seismic ground motions was initially conducted to gather valuable experimental measurements for use in design verification. Additional tests will be conducted with increasing amplifications of the seismic input until a structural failure of the test model occurs. In a cooperative program with NUPEC, the US Nuclear Regulatory Commission (USNRC), through Sandia National Laboratories (SNL), is conducting analytical research on the seismic behavior of RCCV structures. As part of this program, pretest analytical predictions of the model tests are being performed. The dynamic time-history analysis utilizes a highly detailed concrete constitutive model applied to a three-dimensional finite element representation of the test structure. This paper describes the details of the analysis model and provides analysis results.

  13. Statistical Learning Theory for High Dimensional Prediction: Application to Criterion-Keyed Scale Development

    PubMed Central

    Chapman, Benjamin P.; Weiss, Alexander; Duberstein, Paul

    2016-01-01

    Statistical learning theory (SLT) is the statistical formulation of machine learning theory, a body of analytic methods common in “big data” problems. Regression-based SLT algorithms seek to maximize predictive accuracy for some outcome, given a large pool of potential predictors, without overfitting the sample. Research goals in psychology may sometimes call for high dimensional regression. One example is criterion-keyed scale construction, where a scale with maximal predictive validity must be built from a large item pool. Using this as a working example, we first introduce a core principle of SLT methods: minimization of expected prediction error (EPE). Minimizing EPE is fundamentally different from maximizing the within-sample likelihood, and hinges on building a predictive model of sufficient complexity to predict the outcome well, without undue complexity leading to overfitting. We describe how such models are built and refined via cross-validation. We then illustrate how three common SLT algorithms (Supervised Principal Components, Regularization, and Boosting) can be used to construct a criterion-keyed scale predicting all-cause mortality, using a large personality item pool within a population cohort. Each algorithm illustrates a different approach to minimizing EPE. Finally, we consider broader applications of SLT predictive algorithms, both as supportive analytic tools for conventional methods, and as primary analytic tools in discovery phase research. We conclude that despite their differences from the classic null-hypothesis testing approach, or perhaps because of them, SLT methods may hold value as a statistically rigorous approach to exploratory regression. PMID:27454257

  14. Statistical learning theory for high dimensional prediction: Application to criterion-keyed scale development.

    PubMed

    Chapman, Benjamin P; Weiss, Alexander; Duberstein, Paul R

    2016-12-01

    Statistical learning theory (SLT) is the statistical formulation of machine learning theory, a body of analytic methods common in "big data" problems. Regression-based SLT algorithms seek to maximize predictive accuracy for some outcome, given a large pool of potential predictors, without overfitting the sample. Research goals in psychology may sometimes call for high dimensional regression. One example is criterion-keyed scale construction, where a scale with maximal predictive validity must be built from a large item pool. Using this as a working example, we first introduce a core principle of SLT methods: minimization of expected prediction error (EPE). Minimizing EPE is fundamentally different from maximizing the within-sample likelihood, and hinges on building a predictive model of sufficient complexity to predict the outcome well, without undue complexity leading to overfitting. We describe how such models are built and refined via cross-validation. We then illustrate how 3 common SLT algorithms (supervised principal components, regularization, and boosting) can be used to construct a criterion-keyed scale predicting all-cause mortality, using a large personality item pool within a population cohort. Each algorithm illustrates a different approach to minimizing EPE. Finally, we consider broader applications of SLT predictive algorithms, both as supportive analytic tools for conventional methods, and as primary analytic tools in discovery phase research. We conclude that despite their differences from the classic null-hypothesis testing approach, or perhaps because of them, SLT methods may hold value as a statistically rigorous approach to exploratory regression. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
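The contrast drawn here between within-sample fit and expected prediction error can be made concrete with a small simulation: on pure-noise data, a memorizing model (1-nearest-neighbor) achieves zero training error yet roughly doubles the held-out error of a simple mean predictor. This is an illustrative sketch, not code from the paper:

```python
import random

def kfold_mse(xs, ys, fit, k=5):
    """Cross-validation estimate of expected prediction error (held-out MSE)."""
    n = len(xs)
    fold_errs = []
    for fold in range(k):
        test = [i for i in range(n) if i % k == fold]
        train = [i for i in range(n) if i % k != fold]
        predict = fit([xs[i] for i in train], [ys[i] for i in train])
        errs = [(ys[i] - predict(xs[i])) ** 2 for i in test]
        fold_errs.append(sum(errs) / len(errs))
    return sum(fold_errs) / k

def fit_mean(tx, ty):
    """Simplest possible model: always predict the training mean."""
    m = sum(ty) / len(ty)
    return lambda x: m

def fit_1nn(tx, ty):
    """Maximally flexible model: return the y of the nearest training x."""
    def predict(x):
        j = min(range(len(tx)), key=lambda i: abs(tx[i] - x))
        return ty[j]
    return predict

random.seed(0)
xs = [random.random() for _ in range(200)]
ys = [random.gauss(0.0, 1.0) for _ in range(200)]  # pure noise: nothing to learn

# Within-sample, 1-NN is perfect (it memorizes every point)...
train_err_1nn = sum((y - fit_1nn(xs, ys)(x)) ** 2 for x, y in zip(xs, ys)) / len(xs)
print(train_err_1nn)  # 0.0
# ...but out of sample the simpler model has lower estimated EPE.
print(kfold_mse(xs, ys, fit_mean) < kfold_mse(xs, ys, fit_1nn))  # True
```

This is exactly the failure mode cross-validation guards against in criterion-keyed scale construction: items selected by within-sample fit alone will look better than they predict.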

  15. Validation of predictive rules and indices of severity for community acquired pneumonia

    PubMed Central

    Ewig, S; de Roux, A; Bauer, T; Garcia, E; Mensa, J; Niederman, M; Torres, A

    2004-01-01

    Background: A study was undertaken to validate the modified American Thoracic Society (ATS) rule and two British Thoracic Society (BTS) rules for the prediction of ICU admission and mortality in community acquired pneumonia, and to compare these predictions with those based on the pneumonia severity index (PSI). Method: Six hundred and ninety-six consecutive patients (457 men (66%), mean (SD) age 67.8 (17.1) years, range 18–101) admitted to a tertiary care hospital were studied prospectively. Of these, 116 (16.7%) were admitted to the ICU. Results: The modified ATS rule achieved a sensitivity of 69% (95% CI 50.7 to 77.2), specificity of 97% (95% CI 96.4 to 98.9), positive predictive value of 87% (95% CI 78.3 to 93.1), and negative predictive value of 94% (95% CI 91.8 to 95.8) in predicting admission to the ICU. The corresponding predictive indices for mortality were 94% (95% CI 82.5 to 98.7), 93% (95% CI 90.6 to 94.7), 49% (95% CI 38.2 to 59.7), and 99.5% (95% CI 98.5 to 99.9), respectively. These figures compared favourably with both the BTS rules. The BTS-CURB criteria achieved predictions of pneumonia severity and mortality comparable to the PSI. Conclusions: This study confirms the power of the modified ATS rule to predict severe pneumonia in individual patients. It may be incorporated into current guidelines for the assessment of pneumonia severity. The CURB criteria may be used as an alternative tool to PSI for the detection of low risk patients. PMID:15115872
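The four predictive indices quoted above all derive from a single 2x2 table of rule predictions against observed outcomes. A pure-Python sketch follows; the counts are hypothetical, chosen only to roughly reproduce the reported percentages for ICU admission (the paper's actual table is not given in the abstract):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),  # rule positive among ICU admissions
        "specificity": tn / (tn + fp),  # rule negative among non-admissions
        "ppv": tp / (tp + fp),          # ICU admissions among rule positives
        "npv": tn / (tn + fn),          # non-admissions among rule negatives
    }

# Hypothetical counts: 116 ICU admissions among 696 patients
m = diagnostic_metrics(tp=80, fp=12, fn=36, tn=568)
print({k: round(v, 2) for k, v in m.items()})
# {'sensitivity': 0.69, 'specificity': 0.98, 'ppv': 0.87, 'npv': 0.94}
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the prevalence of ICU admission in the cohort, which is why they transfer poorly between populations with different case mixes.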

  16. Novel predictive models for metabolic syndrome risk: a "big data" analytic approach.

    PubMed

    Steinberg, Gregory B; Church, Bruce W; McCall, Carol J; Scott, Adam B; Kalis, Brian P

    2014-06-01

    We applied a proprietary "big data" analytic platform, Reverse Engineering and Forward Simulation (REFS), to dimensions of metabolic syndrome extracted from a large data set compiled from Aetna's databases for 1 large national customer. Our goals were to accurately predict subsequent risk of metabolic syndrome and its various factors on both a population and individual level. The study data set included demographic, medical claim, pharmacy claim, laboratory test, and biometric screening results for 36,944 individuals. The platform reverse-engineered functional models of systems from diverse and large data sources and provided a simulation framework for insight generation. The platform interrogated data sets from the results of 2 Comprehensive Metabolic Syndrome Screenings (CMSSs) as well as complete coverage records; complete data from medical claims, pharmacy claims, and lab results for 2010 and 2011; and responses to health risk assessment questions. The platform predicted subsequent risk of metabolic syndrome, both overall and by risk factor, on population and individual levels, with ROC/AUC varying from 0.80 to 0.88. We demonstrated that improving waist circumference and blood glucose yielded the largest benefits on subsequent risk and medical costs. We also showed that adherence to prescribed medications and, particularly, adherence to routine scheduled outpatient doctor visits, reduced subsequent risk. The platform generated individualized insights using available heterogeneous data within 3 months. The accuracy and short time to insight of this type of analytic platform allowed Aetna to develop targeted cost-effective care management programs for individuals with or at risk for metabolic syndrome.

  17. Predictive validity and correlates of self-assessed resilience among U.S. Army soldiers.

    PubMed

    Campbell-Sills, Laura; Kessler, Ronald C; Ursano, Robert J; Sun, Xiaoying; Taylor, Charles T; Heeringa, Steven G; Nock, Matthew K; Sampson, Nancy A; Jain, Sonia; Stein, Murray B

    2018-02-01

    Self-assessment of resilience could prove valuable to military and other organizations whose personnel confront foreseen stressors. We evaluated the validity of self-assessed resilience among U.S. Army soldiers, including whether predeployment perceived resilience predicted postdeployment emotional disorder. Resilience was assessed via self-administered questionnaire among new soldiers reporting for basic training (N = 35,807) and experienced soldiers preparing to deploy to Afghanistan (N = 8,558). Concurrent validity of self-assessed resilience was evaluated among recruits by estimating its association with past-month emotional disorder. Predictive validity was examined among 3,526 experienced soldiers with no lifetime emotional disorder predeployment. Predictive models estimated associations of predeployment resilience with incidence of emotional disorder through 9 months postdeployment and with marked improvement in coping at 3 months postdeployment. Weights-adjusted regression models incorporated stringent controls for risk factors. Soldiers characterized themselves as very resilient on average [M = 14.34, SD = 4.20 (recruits); M = 14.75, SD = 4.31 (experienced soldiers); theoretical range = 0-20]. Demographic characteristics exhibited only modest associations with resilience, while severity of childhood maltreatment was negatively associated with resilience in both samples. Among recruits, resilience was inversely associated with past-month emotional disorder [adjusted odds ratio (AOR) = 0.65, 95% CI = 0.62-0.68, P < .0005 (per standard score increase)]. Among deployed soldiers, greater predeployment resilience was associated with decreased incidence of emotional disorder (AOR = 0.91; 95% CI = 0.84-0.98; P = .016) and increased odds of improved coping (AOR = 1.36; 95% CI = 1.24-1.49; P < .0005) postdeployment. Findings supported validity of self-assessed resilience among soldiers, although its predictive effect on incidence of

  18. Predicting the chance of vaginal delivery after one cesarean section: validation and elaboration of a published prediction model.

    PubMed

    Fagerberg, Marie C; Maršál, Karel; Källén, Karin

    2015-05-01

    We aimed to validate a widely used US prediction model for vaginal birth after cesarean (Grobman et al. [8]) and modify it to suit Swedish conditions. Women having experienced one cesarean section and at least one subsequent delivery (n=49,472) in the Swedish Medical Birth Registry 1992-2011 were randomly divided into two data sets. In the development data set, variables associated with successful trial of labor were identified using multiple logistic regression. The predictive ability of the estimates previously published by Grobman et al., and of our modified and new estimates, respectively, was then evaluated using the validation data set. The accuracy of the models for prediction of vaginal birth after cesarean was measured by area under the receiver operating characteristic curve. For maternal age, body mass index, prior vaginal delivery, and prior labor arrest, the odds ratio estimates for vaginal birth after cesarean were similar to those previously published. The prediction accuracy increased when information on the indication for the previous cesarean section was added (area under the receiver operating characteristic curve rising from 0.69 to 0.71), and increased further when maternal height and delivery unit cesarean section rates were included (area under the receiver operating characteristic curve=0.74). The correlation between the individual predicted vaginal birth after cesarean probability and the observed trial of labor success rate was high in all the respective predicted probability deciles. Customization of prediction models for vaginal birth after cesarean is of considerable value. Choosing relevant indicators for a Swedish setting made it possible to achieve excellent prediction accuracy for success in trial of labor after cesarean. During the delicate process of counseling about preferred delivery mode after one cesarean section, considering the results of our study may facilitate the choice between a trial of labor or an elective repeat cesarean.

  19. Contrasting analytical and data-driven frameworks for radiogenomic modeling of normal tissue toxicities in prostate cancer.

    PubMed

    Coates, James; Jeyaseelan, Asha K; Ybarra, Norma; David, Marc; Faria, Sergio; Souhami, Luis; Cury, Fabio; Duclos, Marie; El Naqa, Issam

    2015-04-01

    We explore analytical and data-driven approaches to investigate the integration of genetic variations (single nucleotide polymorphisms [SNPs] and copy number variations [CNVs]) with dosimetric and clinical variables in modeling radiation-induced rectal bleeding (RB) and erectile dysfunction (ED) in prostate cancer patients. Sixty-two patients who underwent curative hypofractionated radiotherapy (66 Gy in 22 fractions) between 2002 and 2010 were retrospectively genotyped for CNV and SNP rs5489 in the xrcc1 DNA repair gene. Fifty-four patients had full dosimetric profiles. Two parallel modeling approaches were compared to assess the risk of severe RB (Grade ≥3) and ED (Grade ≥1): maximum-likelihood-estimated generalized Lyman-Kutcher-Burman (LKB) modeling and logistic regression. Statistical resampling based on cross-validation was used to evaluate model predictive power and generalizability to unseen data. Integration of the biological variables xrcc1 CNV and SNP improved the fit of the RB and ED analytical and data-driven models. Cross-validation of the generalized LKB models yielded increases in classification performance of 27.4% for RB and 14.6% for ED when xrcc1 CNV and SNP were included, respectively. Biological variables added to logistic regression modeling improved classification performance over standard dosimetric models by 33.5% for RB and 21.2% for ED models. As a proof-of-concept, we demonstrated that the combination of genetic and dosimetric variables can provide significant improvement in NTCP prediction using analytical and data-driven approaches. The improvement in prediction performance was more pronounced in the data-driven approaches. Moreover, we have shown that CNVs, in addition to SNPs, may be useful structural genetic variants in predicting radiation toxicities. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
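The generalized LKB model referenced here maps a dose-volume histogram to a normal-tissue complication probability (NTCP) through the generalized equivalent uniform dose (gEUD) and a probit link. A minimal sketch follows; the DVH and the parameters TD50, m, and n are hypothetical stand-ins for values the paper fits by maximum likelihood:

```python
import math

def geud(doses, volumes, n):
    """Generalized equivalent uniform dose of a (dose, fractional-volume) DVH;
    small n makes high-dose subvolumes dominate (serial-organ behavior)."""
    total = sum(volumes)
    return sum((v / total) * d ** (1.0 / n) for d, v in zip(doses, volumes)) ** n

def lkb_ntcp(g, td50, m):
    """LKB complication probability: standard-normal CDF of (gEUD-TD50)/(m*TD50)."""
    t = (g - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Hypothetical rectal DVH (dose in Gy, fractional volume) and parameters
g = geud([70.0, 60.0, 40.0], [0.2, 0.3, 0.5], n=0.1)
print(round(lkb_ntcp(g, td50=80.0, m=0.15), 3))
```

The logistic regression alternative compared in the paper replaces the probit link and gEUD reduction with a linear predictor over dosimetric (and here genetic) covariates.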

  20. SU-C-204-01: A Fast Analytical Approach for Prompt Gamma and PET Predictions in a TPS for Proton Range Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroniger, K; Herzog, M; Landry, G

    2015-06-15

    Purpose: We describe and demonstrate a fast analytical tool for prompt-gamma emission prediction based on filter functions applied to the depth dose profile. We present the implementation in a treatment planning system (TPS) of the same algorithm for positron emitter distributions. Methods: The prediction of the desired observable is based on the convolution of filter functions with the depth dose profile. For both prompt-gammas and positron emitters, the results of Monte Carlo simulations (MC) are compared with those of the analytical tool. For prompt-gamma emission from inelastic proton-induced reactions, homogeneous and inhomogeneous phantoms alongside patient data are used as irradiation targets of mono-energetic proton pencil beams. The accuracy of the tool is assessed in terms of the shape of the analytically calculated depth profiles and their absolute yields, compared to MC. For the positron emitters, the method is implemented in a research RayStation TPS and compared to MC predictions. Digital phantoms and patient data are used and positron emitter spatial density distributions are analyzed. Results: Calculated prompt-gamma profiles agree with MC within 3% in terms of absolute yield and reproduce the correct shape. Based on an arbitrary reference material and by means of 6 filter functions (one per chemical element), profiles in any other material composed of those elements can be predicted. The TPS-implemented algorithm is accurate enough to enable, via the analytically calculated positron emitter profiles, detection of range differences between the TPS and MC with errors of the order of 1–2 mm. Conclusion: The proposed analytical method predicts prompt-gamma and positron emitter profiles which generally agree with the distributions obtained by a full MC. The implementation of the tool in a TPS shows that reliable profiles can be obtained directly from the dose calculated by the TPS, without the need of full MC simulation.
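The core operation of the analytical tool described here is a convolution of element-specific filter functions with the depth dose profile. A toy pure-Python sketch of that operation (the dose values and the filter below are illustrative placeholders, not the paper's fitted filters):

```python
def convolve(signal, kernel):
    """Full discrete convolution: out[i+j] += signal[i] * kernel[j]."""
    out = [0.0] * (len(signal) + len(kernel) - 1)
    for i, s in enumerate(signal):
        for j, k in enumerate(kernel):
            out[i + j] += s * k
    return out

# Toy depth dose profile with a Bragg-peak-like maximum, and a short
# smoothing kernel standing in for one element's fitted filter function
depth_dose = [1.0, 1.1, 1.3, 1.8, 3.0, 5.0, 0.5, 0.0]
filt = [0.25, 0.5, 0.25]
profile = convolve(depth_dose, filt)
print(len(profile))  # 10
```

Because convolution is linear, a profile for a compound material can be assembled as a weighted sum of the per-element filter responses, which is what makes the six-filter decomposition in the paper possible.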

  1. Survey of statistical techniques used in validation studies of air pollution prediction models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bornstein, R D; Anderson, S F

    1979-03-01

    Statistical techniques used by meteorologists to validate predictions made by air pollution models are surveyed. Techniques are divided into the following three groups: graphical, tabular, and summary statistics. Some of the practical problems associated with verification are also discussed. Characteristics desired in any validation program are listed and a suggested combination of techniques that possesses many of these characteristics is presented.

  2. Development and validation of classifiers and variable subsets for predicting nursing home admission.

    PubMed

    Nuutinen, Mikko; Leskelä, Riikka-Leena; Suojalehto, Ella; Tirronen, Anniina; Komssi, Vesa

    2017-04-13

    In previous years a substantial number of studies have identified statistically important predictors of nursing home admission (NHA). However, as far as we know, the analyses have been done at the population level. No prior research has analysed the prediction accuracy of an NHA model for individuals. This study is an analysis of 3056 longer-term home care customers in the city of Tampere, Finland. Data were collected from the records of social and health service usage and the RAI-HC (Resident Assessment Instrument - Home Care) assessment system between January 2011 and September 2015. The aim was to identify the most efficient variable subsets for predicting NHA for individuals and to validate their accuracy. Variable subsets for predicting NHA were searched using the sequential forward selection (SFS) method, a variable ranking metric and the classifiers of logistic regression (LR), support vector machine (SVM) and Gaussian naive Bayes (GNB). The validation of the results was ensured using randomly balanced data sets and cross-validation. The primary performance metrics for the classifiers were the prediction accuracy and AUC (average area under the curve). The LR and GNB classifiers achieved 78% accuracy for predicting NHA. The most important variables were RAI MAPLE (Method for Assigning Priority Levels), functional impairment (RAI IADL, Activities of Daily Living), cognitive impairment (RAI CPS, Cognitive Performance Scale), memory disorders (diagnoses G30-G32 and F00-F03), use of community-based health services and prior hospital use (emergency visits and periods of care). The accuracy of the classifier for individuals was high enough to convince the officials of the city of Tampere to integrate the predictive model based on the findings of this study as part of the home care information system. Further work needs to be done to evaluate variables that are modifiable and responsive to interventions.
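Sequential forward selection, as used in this study, greedily adds whichever candidate variable most improves a score until no addition helps. A minimal sketch with a made-up additive scoring function; in the study itself the score would be cross-validated classifier accuracy over the RAI-HC variables:

```python
def sfs(candidates, score):
    """Greedy sequential forward selection over a list of variable names."""
    selected, best = [], score([])
    while True:
        gains = [(score(selected + [c]), c) for c in candidates if c not in selected]
        if not gains:
            break  # every candidate already selected
        top_score, top_var = max(gains)
        if top_score <= best:
            break  # no remaining variable improves the score
        selected.append(top_var)
        best = top_score
    return selected, best

# Made-up marginal gains for a few RAI-style variables; "noise" only hurts
gain = {"maple": 0.12, "iadl": 0.06, "cps": 0.03, "noise": -0.02}
score = lambda subset: 0.5 + sum(gain[v] for v in subset)
sel, best = sfs(list(gain), score)
print(sel)  # ['maple', 'iadl', 'cps']
```

Greedy forward selection is fast but can miss variable combinations whose value only appears jointly, which is one reason the study also used a separate variable ranking metric.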

  3. Development and validation of a novel predictive scoring model for microvascular invasion in patients with hepatocellular carcinoma.

    PubMed

    Zhao, Hui; Hua, Ye; Dai, Tu; He, Jian; Tang, Min; Fu, Xu; Mao, Liang; Jin, Huihan; Qiu, Yudong

    2017-03-01

    Microvascular invasion (MVI) in patients with hepatocellular carcinoma (HCC) cannot be accurately predicted preoperatively. This study aimed to establish a predictive scoring model of MVI in solitary HCC patients without macroscopic vascular invasion. A total of 309 consecutive HCC patients who underwent curative hepatectomy were divided into derivation (n=206) and validation (n=103) cohorts. A predictive scoring model of MVI was established from the valuable predictors in the derivation cohort based on multivariate logistic regression analysis. The performance of the predictive model was evaluated in the derivation and validation cohorts. Preoperative imaging features on contrast-enhanced CT (CECT), such as intratumoral arteries, non-nodular type of HCC and absence of a radiological tumor capsule, were independent predictors of MVI. The predictive scoring model was established according to the β coefficients of the 3 predictors. The area under the receiver operating characteristic curve (AUROC) for the predictive scoring model was 0.872 (95% CI, 0.817-0.928) and 0.856 (95% CI, 0.771-0.940) in the derivation and validation cohorts, respectively. The positive and negative predictive values were 76.5% and 88.0% in the derivation cohort and 74.4% and 88.3% in the validation cohort. The performance of the model was similar between patients with tumor size ≤5 cm and >5 cm in AUROC (P=0.910). The predictive scoring model based on intratumoral arteries, non-nodular type of HCC, and absence of a radiological tumor capsule on preoperative CECT is of great value in the prediction of MVI regardless of tumor size. Copyright © 2017 Elsevier B.V. All rights reserved.
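A scoring model of this kind weights each binary predictor by its logistic regression β coefficient and maps the linear score through the logistic function to a predicted probability. A minimal sketch with hypothetical coefficients for the three predictors (the paper's fitted values are not given in the abstract):

```python
import math

def mvi_risk(intercept, betas, features):
    """Predicted probability from a logistic model: sigmoid of the linear score."""
    z = intercept + sum(b * x for b, x in zip(betas, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients for: intratumoral arteries, non-nodular type,
# absent radiological tumor capsule (each coded 0/1)
betas = [1.5, 1.2, 1.0]
low = mvi_risk(-2.5, betas, [0, 0, 0])   # no risk features present
high = mvi_risk(-2.5, betas, [1, 1, 1])  # all three present
print(round(low, 3), round(high, 3))  # 0.076 0.769
```

In practice such β coefficients are often rounded to integer "points" so the score can be tallied at the bedside; the probability mapping is unchanged apart from the rescaling.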

  4. Analytic Cognitive Style Predicts Religious and Paranormal Belief

    ERIC Educational Resources Information Center

    Pennycook, Gordon; Cheyne, James Allan; Seli, Paul; Koehler, Derek J.; Fugelsang, Jonathan A.

    2012-01-01

    An analytic cognitive style denotes a propensity to set aside highly salient intuitions when engaging in problem solving. We assess the hypothesis that an analytic cognitive style is associated with a history of questioning, altering, and rejecting (i.e., unbelieving) supernatural claims, both religious and paranormal. In two studies, we examined…

  5. Experimental Investigation and Analytical Prediction of σ-Phase Precipitation in AISI 316L Austenitic Stainless Steel

    NASA Astrophysics Data System (ADS)

    Sahlaoui, Habib; Sidhom, Habib

    2013-07-01

    The σ-phase precipitation in industrial AISI 316L stainless steel during aging for up to 80,000 hours between 823 K and 1073 K (550 °C and 800 °C) has been studied using transmission electron microscopy, scanning transmission electron microscopy, and carbon replica energy-dispersive X-ray microanalysis. Three phases were identified: chromium carbides (M23C6), Laves phase (η), and σ-phase (Fe-Cr). M23C6 carbide precipitation occurred first and was followed by the η and σ phases at grain boundaries when the aging temperature was higher than 873 K (600 °C). Precipitation and growth of M23C6 create chromium-depleted zones at the grain boundaries and thereby retard σ-phase formation. The σ-phase is thus controlled by the kinetics of chromium bulk diffusion and can appear only when the chromium content at the grain boundaries and at the M23C6/γ and M23C6/η/γ interfaces recovers, by self-healing, to above a critical value. An analytical model based on equivalent chromium content has been established in this study and successfully validated to predict the time-temperature-precipitation diagram of the σ-phase. The obtained diagram is in good agreement with the experimental results.

  6. VALIDATION OF STANDARD ANALYTICAL PROTOCOL FOR ...

    EPA Pesticide Factsheets

    There is a growing concern with the potential for terrorist use of chemical weapons to cause civilian harm. In the event of an actual or suspected outdoor release of chemically hazardous material in a large area, the extent of contamination must be determined. This requires a system with the ability to prepare and quickly analyze a large number of contaminated samples for the traditional chemical agents, as well as numerous toxic industrial chemicals. Liquid samples (both aqueous and organic), solid samples (e.g., soil), vapor samples (e.g., air) and mixed state samples, all ranging from household items to deceased animals, may require some level of analyses. To meet this challenge, the U.S. Environmental Protection Agency (U.S. EPA) National Homeland Security Research Center, in collaboration with experts from across U.S. EPA and other Federal Agencies, initiated an effort to identify analytical methods for the chemical and biological agents that could be used to respond to a terrorist attack or a homeland security incident. U.S. EPA began development of standard analytical protocols (SAPs) for laboratory identification and measurement of target agents in case of a contamination threat. These methods will be used to help assist in the identification of existing contamination, the effectiveness of decontamination, as well as clearance for the affected population to reoccupy previously contaminated areas. One of the first SAPs developed was for the determin

  7. A collaborative environment for developing and validating predictive tools for protein biophysical characteristics

    NASA Astrophysics Data System (ADS)

    Johnston, Michael A.; Farrell, Damien; Nielsen, Jens Erik

    2012-04-01

    The exchange of information between experimentalists and theoreticians is crucial to improving the predictive ability of theoretical methods and hence our understanding of the related biology. However many barriers exist which prevent the flow of information between the two disciplines. Enabling effective collaboration requires that experimentalists can easily apply computational tools to their data, share their data with theoreticians, and that both the experimental data and computational results are accessible to the wider community. We present a prototype collaborative environment for developing and validating predictive tools for protein biophysical characteristics. The environment is built on two central components; a new python-based integration module which allows theoreticians to provide and manage remote access to their programs; and PEATDB, a program for storing and sharing experimental data from protein biophysical characterisation studies. We demonstrate our approach by integrating PEATSA, a web-based service for predicting changes in protein biophysical characteristics, into PEATDB. Furthermore, we illustrate how the resulting environment aids method development using the Potapov dataset of experimentally measured ΔΔGfold values, previously employed to validate and train protein stability prediction algorithms.

  8. Validation of Skeletal Muscle cis-Regulatory Module Predictions Reveals Nucleotide Composition Bias in Functional Enhancers

    PubMed Central

    Kwon, Andrew T.; Chou, Alice Yi; Arenillas, David J.; Wasserman, Wyeth W.

    2011-01-01

    We performed a genome-wide scan for muscle-specific cis-regulatory modules (CRMs) using three computational prediction programs. Based on the predictions, 339 candidate CRMs were tested in cell culture with NIH3T3 fibroblasts and C2C12 myoblasts for capacity to direct selective reporter gene expression to differentiated C2C12 myotubes. A subset of 19 CRMs validated as functional in the assay. The rate of predictive success reveals striking limitations of computational regulatory sequence analysis methods for CRM discovery. Motif-based methods performed no better than predictions based only on sequence conservation. Analysis of the properties of the functional sequences relative to inactive sequences identifies that nucleotide sequence composition can be an important characteristic to incorporate in future methods for improved predictive specificity. Muscle-related TFBSs predicted within the functional sequences display greater sequence conservation than non-TFBS flanking regions. Comparison with recent MyoD and histone modification ChIP-Seq data supports the validity of the functional regions. PMID:22144875

  9. Clinical judgement in the era of big data and predictive analytics.

    PubMed

    Chin-Yee, Benjamin; Upshur, Ross

    2018-06-01

    Clinical judgement is a central and longstanding issue in the philosophy of medicine which has generated significant interest over the past few decades. In this article, we explore different approaches to clinical judgement articulated in the literature, focusing in particular on data-driven, mathematical approaches which we contrast with narrative, virtue-based approaches to clinical reasoning. We discuss the tension between these different clinical epistemologies and further explore the implications of big data and machine learning for a philosophy of clinical judgement. We argue for a pluralistic, integrative approach, and demonstrate how narrative, virtue-based clinical reasoning will remain indispensable in an era of big data and predictive analytics. © 2017 John Wiley & Sons, Ltd.

  10. Investigation of the validity of radiosity for sound-field prediction in cubic rooms

    NASA Astrophysics Data System (ADS)

    Nosal, Eva-Marie; Hodgson, Murray; Ashdown, Ian

    2004-12-01

    This paper explores acoustical (or time-dependent) radiosity using predictions made in four cubic enclosures. The methods and algorithms used are those presented in a previous paper by the same authors [Nosal, Hodgson, and Ashdown, J. Acoust. Soc. Am. 116(2), 970-980 (2004)]. First, the algorithm, methods, and conditions for convergence are investigated by comparison of numerous predictions for the four cubic enclosures. Here, variables and parameters used in the predictions are varied to explore the effect of absorption distribution, the necessary conditions for convergence of the numerical solution to the analytical solution, form-factor prediction methods, and the computational requirements. The predictions are also used to investigate the effect of absorption distribution on sound fields in cubic enclosures with diffusely reflecting boundaries. Acoustical radiosity is then compared to predictions made in the four enclosures by a ray-tracing model that can account for diffuse reflection. Comparisons are made of echograms, room-acoustical parameters, and discretized echograms.

  12. Broadband Fan Noise Prediction System for Turbofan Engines. Volume 3; Validation and Test Cases

    NASA Technical Reports Server (NTRS)

    Morin, Bruce L.

    2010-01-01

    Pratt & Whitney has developed a Broadband Fan Noise Prediction System (BFaNS) for turbofan engines. This system computes the noise generated by turbulence impinging on the leading edges of the fan and fan exit guide vane, and noise generated by boundary-layer turbulence passing over the fan trailing edge. BFaNS has been validated on three fan rigs that were tested during the NASA Advanced Subsonic Technology Program (AST). The predicted noise spectra agreed well with measured data. The predicted effects of fan speed, vane count, and vane sweep also agreed well with measurements. The noise prediction system consists of two computer programs: Setup_BFaNS and BFaNS. Setup_BFaNS converts user-specified geometry and flow-field information into a BFaNS input file. From this input file, BFaNS computes the inlet and aft broadband sound power spectra generated by the fan and FEGV. The output file from BFaNS contains the inlet, aft and total sound power spectra from each noise source. This report is the third volume of a three-volume set documenting the Broadband Fan Noise Prediction System: Volume 1: Setup_BFaNS User's Manual and Developer's Guide; Volume 2: BFaNS User's Manual and Developer's Guide; and Volume 3: Validation and Test Cases. The present volume begins with an overview of the Broadband Fan Noise Prediction System, followed by validation studies that were done on three fan rigs. It concludes with recommended improvements and additional studies for BFaNS.

  13. Heterogeneous postsurgical data analytics for predictive modeling of mortality risks in intensive care units.

    PubMed

    Yun Chen; Hui Yang

    2014-01-01

    The rapid advancements of biomedical instrumentation and healthcare technology have resulted in data-rich environments in hospitals. However, the meaningful information extracted from these rich datasets is limited. There is a dire need to go beyond current medical practices and develop data-driven methods and tools that enable (i) the handling of big data, (ii) the extraction of data-driven knowledge, and (iii) the exploitation of acquired knowledge for optimizing clinical decisions. The present study focuses on the prediction of mortality rates in Intensive Care Units (ICU) using patient-specific healthcare recordings. It is worth mentioning that postsurgical monitoring in the ICU leads to massive datasets with unique properties, e.g., variable heterogeneity, patient heterogeneity, and time asynchronization. To cope with the challenges in ICU datasets, we developed a postsurgical decision support system with a series of analytical tools, including data categorization, data pre-processing, feature extraction, feature selection, and predictive modeling. Experimental results show that the proposed data-driven methodology outperforms traditional approaches, based on the evaluation of real-world ICU data from 4000 subjects in the database. This research shows great potential for the use of data-driven analytics to improve the quality of healthcare services.
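A minimal sketch of two of the pipeline stages described (standardization of heterogeneous variables, then a logistic risk model); the weights and bias here are illustrative, not the study's fitted values:

```python
import math

def standardize(rows):
    # Z-score each feature column -- a typical pre-processing step for
    # heterogeneous ICU variables recorded on very different scales.
    out_cols = []
    for col in zip(*rows):
        mean = sum(col) / len(col)
        sd = math.sqrt(sum((x - mean) ** 2 for x in col) / len(col)) or 1.0
        out_cols.append([(x - mean) / sd for x in col])
    return [list(r) for r in zip(*out_cols)]

def risk_score(z_features, weights, bias=0.0):
    # Logistic model mapping standardized features to a mortality
    # probability in (0, 1).
    s = bias + sum(w * x for w, x in zip(weights, z_features))
    return 1.0 / (1.0 + math.exp(-s))
```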

  14. Predictive validity of the Sødring Motor Evaluation of Stroke Patients (SMES).

    PubMed

    Wyller, T B; Sødring, K M; Sveen, U; Ljunggren, A E; Bautz-Holter, E

    1996-12-01

    The Sødring Motor Evaluation of Stroke Patients (SMES) has been developed as an instrument for the evaluation by physiotherapists of motor function and activities in stroke patients. The predictive validity of the instrument was studied in a consecutive sample of 93 acute stroke patients, assessed in the acute phase and after one year. The outcome measures were: survival, residence at home or in institution, the Barthel ADL index (dichotomized at 19/20), and the Frenchay Activities Index (FAI) (dichotomized at 9/10). The SMES, scored in the acute phase, demonstrated a marginally significant predictive power regarding survival, but was a highly significant predictor regarding the other outcomes. The adjusted odds ratio for a good versus a poor outcome for patients in the upper versus the lower tertile of the SMES arm subscore was 5.4 (95% confidence interval 0.9-59) for survival, 11.5 (2.1-88) for living at home, 86.3 (11-infinity) for a high Barthel score, and 31.4 (5.2-288) for a high FAI score. We conclude that SMES has high predictive validity.

  15. Validation of the thermophysiological model by Fiala for prediction of local skin temperatures

    NASA Astrophysics Data System (ADS)

    Martínez, Natividad; Psikuta, Agnes; Kuklane, Kalev; Quesada, José Ignacio Priego; de Anda, Rosa María Cibrián Ortiz; Soriano, Pedro Pérez; Palmer, Rosario Salvador; Corberán, José Miguel; Rossi, René Michel; Annaheim, Simon

    2016-12-01

    The most complete and realistic physiological data are derived from direct measurements during human experiments; however, they present some limitations such as ethical concerns, time and cost burden. Thermophysiological models are able to predict human thermal response in a wide range of environmental conditions, but their use is limited due to lack of validation. The aim of this work was to validate the thermophysiological model by Fiala for prediction of local skin temperatures against a dedicated database containing 43 different human experiments representing a wide range of conditions. The validation was conducted based on root-mean-square deviation (rmsd) and bias. The thermophysiological model by Fiala showed good precision when predicting core and mean skin temperature (rmsd 0.26 and 0.92 °C, respectively) and also local skin temperatures for most body sites (average rmsd for local skin temperatures 1.32 °C). However, an increased deviation of the predictions was observed for the forehead skin temperature (rmsd of 1.63 °C) and for the thigh during exercising exposures (rmsd of 1.41 °C). Possible reasons for the observed deviations are lack of information on measurement circumstances (hair, head coverage interference) or an overestimation of the sweat evaporative cooling capacity for the head and thigh, respectively. This work has highlighted the importance of collecting details about the clothing worn and how and where the sensors were attached to the skin for achieving more precise results in the simulations.
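The two validation statistics used here (rmsd and bias) are straightforward to compute; a minimal sketch:

```python
import math

def rmsd(predicted, observed):
    # Root-mean-square deviation between predicted and measured
    # temperatures (deg C).
    n = len(observed)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n)

def bias(predicted, observed):
    # Mean signed deviation; positive values indicate over-prediction.
    return sum(p - o for p, o in zip(predicted, observed)) / len(observed)
```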

  16. Validated Risk Score for Predicting 6-Month Mortality in Infective Endocarditis.

    PubMed

    Park, Lawrence P; Chu, Vivian H; Peterson, Gail; Skoutelis, Athanasios; Lejko-Zupa, Tatjana; Bouza, Emilio; Tattevin, Pierre; Habib, Gilbert; Tan, Ren; Gonzalez, Javier; Altclas, Javier; Edathodu, Jameela; Fortes, Claudio Querido; Siciliano, Rinaldo Focaccia; Pachirat, Orathai; Kanj, Souha; Wang, Andrew

    2016-04-18

    Host factors and complications have been associated with higher mortality in infective endocarditis (IE). We sought to develop and validate a model of clinical characteristics to predict 6-month mortality in IE. Using a large multinational prospective registry of definite IE (International Collaboration on Endocarditis [ICE]-Prospective Cohort Study [PCS], 2000-2006, n=4049), a model to predict 6-month survival was developed by Cox proportional hazards modeling with inverse probability weighting for surgery treatment and was internally validated by the bootstrapping method. This model was externally validated in an independent prospective registry (ICE-PLUS, 2008-2012, n=1197). The 6-month mortality was 971 of 4049 (24.0%) in the ICE-PCS cohort and 342 of 1197 (28.6%) in the ICE-PLUS cohort. Surgery during the index hospitalization was performed in 48.1% and 54.0% of the cohorts, respectively. In the derivation model, variables related to host factors (age, dialysis), IE characteristics (prosthetic or nosocomial IE, causative organism, left-sided valve vegetation), and IE complications (severe heart failure, stroke, paravalvular complication, and persistent bacteremia) were independently associated with 6-month mortality, and surgery was associated with a lower risk of mortality (Harrell's C statistic 0.715). In the validation model, these variables had similar hazard ratios (Harrell's C statistic 0.682), with a similar, independent benefit of surgery (hazard ratio 0.74, 95% CI 0.62-0.89). A simplified risk model was developed by weight adjustment of these variables. Six-month mortality after IE is ≈25% and is predicted by host factors, IE characteristics, and IE complications. Surgery during the index hospitalization is associated with lower mortality but is performed less frequently in the highest risk patients. A simplified risk model may be used to identify specific risk subgroups in IE. © 2016 The Authors. Published on behalf of the American Heart Association.
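A simplified additive risk score of the kind described can be sketched as follows; the point weights are hypothetical, since the abstract does not report the published model's weights:

```python
# Hypothetical integer point weights for the predictors named in the
# abstract (the published model's actual weights are not reproduced here).
POINTS = {
    "age_over_70": 2,
    "dialysis": 3,
    "prosthetic_or_nosocomial_ie": 2,
    "high_risk_organism": 2,
    "severe_heart_failure": 3,
    "stroke": 2,
    "paravalvular_complication": 2,
    "persistent_bacteremia": 2,
    "surgery": -2,  # surgery was associated with lower mortality
}

def simplified_risk(features):
    # Sum the points for every feature present in the patient record.
    return sum(POINTS[f] for f in features)
```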

  17. Job Embeddedness Demonstrates Incremental Validity When Predicting Turnover Intentions for Australian University Employees

    PubMed Central

    Heritage, Brody; Gilbert, Jessica M.; Roberts, Lynne D.

    2016-01-01

    Job embeddedness is a construct that describes the manner in which employees can be enmeshed in their jobs, reducing their turnover intentions. Recent questions regarding the properties of quantitative job embeddedness measures, and their predictive utility, have been raised. Our study compared two competing reflective measures of job embeddedness, examining their convergent, criterion, and incremental validity, as a means of addressing these questions. Cross-sectional quantitative data from 246 Australian university employees (146 academic; 100 professional) was gathered. Our findings indicated that the two compared measures of job embeddedness were convergent when total scale scores were examined. Additionally, job embeddedness was capable of demonstrating criterion and incremental validity, predicting unique variance in turnover intention. However, this finding was not readily apparent with one of the compared job embeddedness measures, which demonstrated comparatively weaker evidence of validity. We discuss the theoretical and applied implications of these findings, noting that job embeddedness has a complementary place among established determinants of turnover intention. PMID:27199817

  18. Bio-analytical method development and validation of Rasagiline by high performance liquid chromatography tandem mass spectrometry detection and its application to pharmacokinetic study

    PubMed Central

    Konda, Ravi Kumar; Chandu, Babu Rao; Challa, B.R.; Kothapalli, Chandrasekhar B.

    2012-01-01

    The most suitable bio-analytical method based on liquid–liquid extraction has been developed and validated for quantification of Rasagiline in human plasma. Rasagiline-13C3 mesylate was used as an internal standard for Rasagiline. Zorbax Eclipse Plus C18 (2.1 mm×50 mm, 3.5 μm) column provided chromatographic separation of analyte followed by detection with mass spectrometry. The method involved simple isocratic chromatographic condition and mass spectrometric detection in the positive ionization mode using an API-4000 system. The total run time was 3.0 min. The proposed method has been validated with the linear range of 5–12000 pg/mL for Rasagiline. The intra-run and inter-run precision values were within 1.3%–2.9% and 1.6%–2.2% respectively for Rasagiline. The overall recovery for Rasagiline and Rasagiline-13C3 mesylate analog was 96.9% and 96.7% respectively. This validated method was successfully applied to the bioequivalence and pharmacokinetic study of human volunteers under fasting condition. PMID:29403764
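The precision figures quoted (intra-run and inter-run CV%) follow the standard coefficient-of-variation formula; a minimal sketch:

```python
import math

def cv_percent(replicates):
    # Coefficient of variation (%): 100 * sample SD / mean, the precision
    # metric reported for intra-run and inter-run validation batches.
    n = len(replicates)
    mean = sum(replicates) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in replicates) / (n - 1))
    return 100.0 * sd / mean
```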

  19. Validation of BEHAVE fire behavior predictions in oak savannas using five fuel models

    Treesearch

    Keith Grabner; John Dwyer; Bruce Cutter

    1997-01-01

    Prescribed fire is a valuable tool in the restoration and management of oak savannas. BEHAVE, a fire behavior prediction system developed by the United States Forest Service, can be a useful tool when managing oak savannas with prescribed fire. BEHAVE predictions of fire rate-of-spread and flame length were validated using four standardized fuel models: Fuel Model 1 (...

  20. Analytical validation of an ultra low-cost mobile phone microplate reader for infectious disease testing.

    PubMed

    Wang, Li-Ju; Naudé, Nicole; Demissie, Misganaw; Crivaro, Anne; Kamoun, Malek; Wang, Ping; Li, Lei

    2018-07-01

    Most mobile health (mHealth) diagnostic devices for laboratory tests analyze only one sample at a time, which is not suitable for large-volume serology testing, especially in low-resource settings with a shortage of health professionals. In this study, we developed an ultra-low-cost, clinically accurate mobile phone microplate reader (mReader), and clinically validated this optical device for 12 infectious disease tests. The mReader optically reads 96 samples on a microplate at one time. 771 de-identified patient samples were tested across 12 serology assays for bacterial/viral infections. The mReader and the clinical instrument blindly read and analyzed all tests in parallel. The analytical accuracy and diagnostic performance of the mReader were evaluated across the clinical reportable categories by comparison with clinical laboratory testing results. The mReader exhibited 97.59-99.90% analytical accuracy and <5% coefficient of variation (CV). The positive percent agreement (PPA) in all 12 tests reached 100%, negative percent agreement (NPA) was higher than 83% except for one test (42.86%), and overall percent agreement (OPA) ranged from 89.33% to 100%. We envision that the mReader can benefit underserved areas/populations and low-resource settings in rural clinics/hospitals at a low cost (~$50 USD) with clinical-level analytical quality. It has the potential to improve health access, speed up healthcare delivery, and reduce health and education disparities by providing access to a low-cost spectrophotometer. Copyright © 2018 Elsevier B.V. All rights reserved.
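The agreement statistics reported (PPA, NPA, OPA) come from the 2x2 table of paired qualitative results against the clinical comparator; a minimal sketch:

```python
def percent_agreement(tp, fp, fn, tn):
    # tp/fp/fn/tn: counts from the 2x2 table, with the clinical
    # instrument as the comparator method.
    ppa = 100.0 * tp / (tp + fn)   # agreement on comparator positives
    npa = 100.0 * tn / (tn + fp)   # agreement on comparator negatives
    opa = 100.0 * (tp + tn) / (tp + fp + fn + tn)
    return ppa, npa, opa
```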

  1. Validity of a manual soft tissue profile prediction method following mandibular setback osteotomy.

    PubMed

    Kolokitha, Olga-Elpis

    2007-10-01

    The aim of this study was to determine the validity of a manual cephalometric method used for predicting the post-operative soft tissue profiles of patients who underwent mandibular setback surgery, and to compare it to a computerized cephalometric prediction method (Dentofacial Planner). Lateral cephalograms of 18 adults with mandibular prognathism, taken at the end of pre-surgical orthodontics and approximately one year after surgery, were used. To test the validity of the manual method, the prediction tracings were compared to the actual post-operative tracings. The Dentofacial Planner software was used to develop the computerized post-surgical prediction tracings. Both manual and computerized prediction printouts were analyzed using the cephalometric system PORDIOS. Statistical analysis was performed by means of t-tests. Comparison between manual prediction tracings and the actual post-operative profile showed that the manual method results in more convex soft tissue profiles; the upper lip was found in a more prominent position, upper lip thickness was increased, and the mandible and lower lip were found in a less posterior position than in the actual profiles. Comparison between computerized and manual prediction methods showed that in the manual method upper lip thickness was increased, the upper lip was found in a more anterior position, and the lower anterior facial height was increased as compared to the computerized prediction method. Cephalometric simulation of the post-operative soft tissue profile following orthodontic-surgical management of mandibular prognathism imposes certain limitations related to the methods employed. However, both manual and computerized prediction methods remain a useful tool for patient communication.

  2. Towards Adaptive Educational Assessments: Predicting Student Performance using Temporal Stability and Data Analytics in Learning Management Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thakur, Gautam; Olama, Mohammed M; McNair, Wade

    Data-driven assessments and adaptive feedback are becoming a cornerstone of research in educational data analytics, which involves developing methods for exploring the unique types of data that come from the educational context. For example, predicting college student performance is crucial for both students and educational institutions. It can support timely intervention to prevent students from failing a course, increase the efficacy of advising functions, and improve course completion rates. In this paper, we present our efforts in using data analytics to enable educationists to design novel data-driven assessment and feedback mechanisms. To achieve this objective, we investigate the temporal stability of students' grades and perform predictive analytics on academic data collected from 2009 through 2013 in one of the most commonly used learning management systems, Moodle. First, we identified the data features useful for assessments and for predicting student outcomes, such as students' scores in homework assignments, quizzes, and exams, in addition to their activities in discussion forums and their total Grade Point Average (GPA) in the term they enrolled in the course. Second, time series models in both the frequency and time domains are applied to characterize the progression as well as overall projections of the grades. In particular, the models analyzed the stability as well as the fluctuation of grades among students across the collegiate years (from freshman to senior) and across disciplines. Third, logistic regression and neural network predictive models are used to identify, as early as possible, students who are in danger of failing the course they are currently enrolled in. These models compute the likelihood of any given student failing (or passing) the current course. The time series analysis indicates that assessments and continuous feedback are more critical for freshmen and sophomores (even with easy courses) than for seniors, and those assessments may
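The failure-likelihood model described is a logistic regression; a minimal sketch with illustrative (not fitted) coefficients and hypothetical feature names:

```python
import math

# Illustrative coefficients only; the paper's fitted model is not
# reproduced in this record.
INTERCEPT = 4.0
COEF = {"gpa": -1.2, "quiz_avg": -0.03, "forum_posts": -0.05}

def p_fail(gpa, quiz_avg, forum_posts):
    # Logistic regression: likelihood that a student fails the course.
    s = (INTERCEPT + COEF["gpa"] * gpa
         + COEF["quiz_avg"] * quiz_avg
         + COEF["forum_posts"] * forum_posts)
    return 1.0 / (1.0 + math.exp(-s))
```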

  3. The Predictive Validity of CBM Writing Indices for Eighth-Grade Students

    ERIC Educational Resources Information Center

    Amato, Janelle M.; Watkins, Marley W.

    2011-01-01

    Curriculum-based measurement (CBM) is an alternative to traditional assessment techniques. Technical work has begun to identify CBM writing indices that are psychometrically sound for monitoring older students' writing proficiency. This study examined the predictive validity of CBM writing indices in a sample of 447 eighth-grade students.…

  4. Concurrent and Predictive Validity of the Phelps Kindergarten Readiness Scale-II

    ERIC Educational Resources Information Center

    Duncan, Jennifer; Rafter, Erin M.

    2005-01-01

    The purpose of this research was to establish the concurrent and predictive validity of the Phelps Kindergarten Readiness Scale, Second Edition (PKRS-II; L. Phelps, 2003). Seventy-four kindergarten students of diverse ethnic backgrounds enrolled in a northeastern suburban school participated in the study. The concurrent administration of the…

  5. Acoustic-Structure Interaction in Rocket Engines: Validation Testing

    NASA Technical Reports Server (NTRS)

    Davis, R. Benjamin; Joji, Scott S.; Parks, Russel A.; Brown, Andrew M.

    2009-01-01

    While analyzing a rocket engine component, it is often necessary to account for any effects that adjacent fluids (e.g., liquid fuels or oxidizers) might have on the structural dynamics of the component. To better characterize the fully coupled fluid-structure system responses, an analytical approach that models the system as a coupled expansion of rigid-wall acoustic modes and in vacuo structural modes has been proposed. The present work seeks to experimentally validate this approach. To experimentally observe well-coupled system modes, the test article and fluid cavities are designed such that the uncoupled structural frequencies are comparable to the uncoupled acoustic frequencies. The test measures the natural frequencies, mode shapes, and forced response of cylindrical test articles in contact with fluid-filled cylindrical and/or annular cavities. The test article is excited with a stinger and the fluid-loaded response is acquired using a laser-Doppler vibrometer. The experimentally determined fluid-loaded natural frequencies are compared directly to the results of the analytical model. Due to the geometric configuration of the test article, the analytical model is found to be valid for natural modes with circumferential wave numbers greater than four. In the case of these modes, the natural frequencies predicted by the analytical model demonstrate excellent agreement with the experimentally determined natural frequencies.

  6. Temporal and external validation of a prediction model for adverse outcomes among inpatients with diabetes.

    PubMed

    Adderley, N J; Mallett, S; Marshall, T; Ghosh, S; Rayman, G; Bellary, S; Coleman, J; Akiboye, F; Toulis, K A; Nirantharakumar, K

    2018-06-01

    To temporally and externally validate our previously developed prediction model, which used data from University Hospitals Birmingham to identify inpatients with diabetes at high risk of adverse outcome (mortality or excessive length of stay), in order to demonstrate its applicability to other hospital populations within the UK. Temporal validation was performed using data from University Hospitals Birmingham and external validation was performed using data from both the Heart of England NHS Foundation Trust and Ipswich Hospital. All adult inpatients with diabetes were included. Variables included in the model were age, gender, ethnicity, admission type, intensive therapy unit admission, insulin therapy, albumin, sodium, potassium, haemoglobin, C-reactive protein, estimated GFR and neutrophil count. Adverse outcome was defined as excessive length of stay or death. Model discrimination in the temporal and external validation datasets was good. In temporal validation using data from University Hospitals Birmingham, the area under the curve was 0.797 (95% CI 0.785-0.810), sensitivity was 70% (95% CI 67-72) and specificity was 75% (95% CI 74-76). In external validation using data from Heart of England NHS Foundation Trust, the area under the curve was 0.758 (95% CI 0.747-0.768), sensitivity was 73% (95% CI 71-74) and specificity was 66% (95% CI 65-67). In external validation using data from Ipswich, the area under the curve was 0.736 (95% CI 0.711-0.761), sensitivity was 63% (95% CI 59-68) and specificity was 69% (95% CI 67-72). These results were similar to those for the internally validated model derived from University Hospitals Birmingham. The prediction model to identify patients with diabetes at high risk of developing an adverse event while in hospital performed well in temporal and external validation. The externally validated prediction model is a novel tool that can be used to improve care pathways for inpatients with diabetes. Further research to assess
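The discrimination statistics quoted (area under the curve, sensitivity, specificity) can be sketched as follows; the AUC uses the rank (Mann-Whitney) formulation:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    # Sensitivity = true-positive rate on adverse outcomes;
    # specificity = true-negative rate on non-adverse outcomes.
    return tp / (tp + fn), tn / (tn + fp)

def auc(scores_pos, scores_neg):
    # Area under the ROC curve via the Mann-Whitney statistic: the
    # probability that a randomly chosen adverse-outcome patient scores
    # higher than a randomly chosen non-adverse patient (ties = 1/2).
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```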

  7. A Process Analytical Technology (PAT) approach to control a new API manufacturing process: development, validation and implementation.

    PubMed

    Schaefer, Cédric; Clicq, David; Lecomte, Clémence; Merschaert, Alain; Norrant, Edith; Fotiadu, Frédéric

    2014-03-01

    Pharmaceutical companies are progressively adopting the Process Analytical Technology (PAT) and Quality-by-Design (QbD) concepts promoted by the regulatory agencies, with the aim of building quality directly into the product by combining thorough scientific understanding and quality risk management. An analytical method based on near-infrared (NIR) spectroscopy was developed as a PAT tool for on-line control of a crystallization step in the manufacture of an API (active pharmaceutical ingredient), during which the API and residual solvent contents need to be precisely determined to reach the predefined seeding point. An original methodology based on QbD principles was designed to conduct the development and validation of the NIR method and to ensure that it is fit for its intended use. On this basis, partial least squares (PLS) models were developed and optimized using chemometric methods. The method was fully validated according to the ICH Q2(R1) guideline using the accuracy-profile approach. The dosing ranges were evaluated as 9.0-12.0% w/w for the API and 0.18-1.50% w/w for residual methanol. Because the variability of the sampling method and the reference method is by nature included in the variability obtained for the NIR method during the validation phase, a real-time process monitoring exercise was performed to prove the method fit for purpose. The implementation of this in-process control (IPC) method on the industrial plant from the launch of the new API synthesis process will enable automatic control of the final crystallization step, ensuring a predefined quality level of the API. In addition, several valuable benefits are expected, including reduction of the process time and elimination of a difficult sampling step and tedious off-line analyses. © 2013 Published by Elsevier B.V.

  8. AI based HealthCare Platform for Real Time, Predictive and Prescriptive Analytics using Reactive Programming

    NASA Astrophysics Data System (ADS)

    Kaur, Jagreet; Singh Mann, Kulwinder, Dr.

    2018-01-01

    AI in healthcare needs to bring real, actionable, individualized insights in real time to patients and doctors to support treatment decisions. This requires a patient-centred platform integrating EHR data, patient data, prescriptions, monitoring, and clinical research data. This paper proposes a generic architecture for an AI-based healthcare analytics platform built on open-source technologies: Apache Beam, Apache Flink, Apache Spark, Apache NiFi, Kafka, Tachyon, GlusterFS, and NoSQL stores (Elasticsearch, Cassandra). The paper shows the importance of applying AI-based predictive and prescriptive analytics techniques in the health sector. The system will be able to extract useful knowledge that helps in decision making and medical monitoring in real time through intelligent process analysis and big data processing.

  9. Fast analytical model of MZI micro-opto-mechanical pressure sensor

    NASA Astrophysics Data System (ADS)

    Rochus, V.; Jansen, R.; Goyvaerts, J.; Neutens, P.; O’Callaghan, J.; Rottenberg, X.

    2018-06-01

    This paper presents a fast analytical procedure for designing a micro-opto-mechanical pressure sensor (MOMPS), taking into account mechanical nonlinearity and optical losses. A realistic model of the photonic MZI is proposed, strongly coupled to a nonlinear mechanical model of the membrane. Based on the membrane dimensions, the residual stress, the position of the waveguide, the optical wavelength, and the phase variation due to the opto-mechanical coupling, we derive an analytical model which allows us to predict the response of the total system. The effects of the nonlinearity and the losses on the total performance are carefully studied, and measurements on fabricated devices are used to validate the model. Finally, a design procedure is proposed to enable fast design of this new type of pressure sensor.

  10. Validation of predictive equations for weight and height using a metric tape.

    PubMed

    Rabito, E I; Mialich, M S; Martínez, E Z; García, R W D; Jordao, A A; Marchini, J S

    2008-01-01

    Weight and height measurements are important data for the evaluation of nutritional status, but some situations prevent taking these measurements in the standard manner, requiring special equipment or an estimate by predictive equations. Predictive equations for height and weight requiring only a metric tape as an instrument have recently been developed. The aim was to validate three predictive equations for weight and two for height proposed by Rabito, and to evaluate their agreement with the equations proposed by Chumlea. The following data were collected: sex (S), age (A) and anthropometric measurements, i.e., weight (kg), height (m), subscapular skinfold (SSSF, mm), calf (CC), arm (AC) and abdominal (AbC) circumferences (cm), arm length (cm), and half span (HS, cm). Data were analyzed statistically using the Lin coefficient to test the agreement between the equations and the St. Laurent coefficient to compare the estimated weight and height values with real values. 100 adults (age 48 +/- 18 years) admitted to the University Hospital (HCFMRP/USP) were evaluated. Equations I: W (kg) = 0.5030 (AC) + 0.5634 (AbC) + 1.3180 (CC) + 0.0339 (SSSF) - 43.1560 and II: W (kg) = 0.4808 (AC) + 0.5646 (AbC) + 1.3160 (CC) - 42.2450 showed the highest coefficients of agreement for weight, and equations IV and V showed the highest coefficients of agreement for height. The St. Laurent coefficient indicated that equations III and V were valid for weight and height, respectively. Among the validated equations, equation III: W (kg) = 0.5759 (AC) + 0.5263 (AbC) + 1.2452 (CC) - 4.8689 (S) - 32.9241 and equation V: H (cm) = 63.525 - 3.237 (S) - 0.06904 (A) + 1.293 (HS) are recommended for weight and height, respectively, because of their easy use with hospitalized patients, and the equations should be validated in other situations.
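The two recommended equations can be applied directly; a minimal sketch, assuming the decimal commas in equation V are decimal points, that the sex indicator S is coded 0/1 (the abstract does not state which sex is 1), and that the height result is on a centimetre scale:

```python
def weight_eq3(ac, abc, cc, s):
    # Equation III: arm (AC), abdominal (AbC) and calf (CC)
    # circumferences in cm; S is the 0/1 sex indicator.
    return 0.5759 * ac + 0.5263 * abc + 1.2452 * cc - 4.8689 * s - 32.9241

def height_eq5(s, age, half_span):
    # Equation V: age (A) in years, half span (HS) in cm.
    return 63.525 - 3.237 * s - 0.06904 * age + 1.293 * half_span
```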

  11. Validation of High Frequency (HF) Propagation Prediction Models in the Arctic region

    NASA Astrophysics Data System (ADS)

    Athieno, R.; Jayachandran, P. T.

    2014-12-01

    Despite the emergence of modern techniques for long distance communication, ionospheric communication in the high frequency (HF) band (3-30 MHz) remains significant to both civilian and military users. However, the efficient use of the ever-varying ionosphere as a propagation medium depends on the reliability of ionospheric and HF propagation prediction models. Most available models are empirical, implying that the underlying data collection has to be sufficiently large to provide good results. The models we present were developed with little data from the high latitudes, which necessitates their validation. This paper presents the validation of three long-term HF propagation prediction models over a path within the Arctic region. Measurements of the Maximum Usable Frequency for a 3000 km range (MUF(3000)F2) for Resolute, Canada (74.75° N, 265.00° E), are obtained from hand-scaled ionograms generated by the Canadian Advanced Digital Ionosonde (CADI). The observations have been compared with predictions obtained from the Ionospheric Communication Enhanced Profile Analysis Program (ICEPAC), Voice of America Coverage Analysis Program (VOACAP) and International Telecommunication Union Recommendation 533 (ITU-REC533) for 2009, 2011, 2012 and 2013. A statistical analysis shows that the monthly predictions seem to reproduce the general features of the observations throughout the year, though this is more evident in the winter and equinox months. Both predictions and observations show a diurnal and seasonal variation. The analysed models did not show large differences in their performances. However, there are noticeable differences across seasons for the entire period analysed: REC533 gives a better performance in winter months while VOACAP has a better performance for both equinox and summer months. VOACAP gives a better performance in the daily predictions compared to ICEPAC though, in general, the monthly predictions seem to agree more with the

  12. Gradient retention prediction of acid-base analytes in reversed phase liquid chromatography: a simplified approach for acetonitrile-water mobile phases.

    PubMed

    Andrés, Axel; Rosés, Martí; Bosch, Elisabeth

    2014-11-28

    In previous work, a two-parameter model to predict the chromatographic retention of ionizable analytes in gradient mode was proposed. However, the procedure required some preliminary experimental work to obtain a suitable description of the pKa change with the mobile phase composition. In the present study, this preliminary experimental work has been simplified: the analyte pKa values have been calculated through equations whose coefficients vary depending on the functional group. This new approach also required further simplifications regarding the retention of the totally neutral and totally ionized species. After the simplifications were applied, new predictions were obtained and compared with the previously acquired experimental data. The simplified model gave reasonably good predictions while saving a significant amount of time and resources. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Predictive Validity of a Student Self-Report Screener of Behavioral and Emotional Risk in an Urban High School

    ERIC Educational Resources Information Center

    Dowdy, Erin; Harrell-Williams, Leigh; Dever, Bridget V.; Furlong, Michael J.; Moore, Stephanie; Raines, Tara; Kamphaus, Randy W.

    2016-01-01

    Increasingly, schools are implementing school-based screening for risk of behavioral and emotional problems; hence, foundational evidence supporting the predictive validity of screening instruments is important to assess. This study examined the predictive validity of the Behavior Assessment System for Children-2 Behavioral and Emotional Screening…

  14. A simplified approach to the pooled analysis of calibration of clinical prediction rules for systematic reviews of validation studies

    PubMed Central

    Dimitrov, Borislav D; Motterlini, Nicola; Fahey, Tom

    2015-01-01

    Objective: Estimating the calibration performance of clinical prediction rules (CPRs) in systematic reviews of validation studies is not possible when predicted values are neither published nor accessible, or when insufficient or no individual participant data are available. Our aims were to describe a simplified approach for outcome prediction and calibration assessment and to evaluate its functionality and validity. Study design and methods: Methodological study of systematic reviews of validation studies of CPRs: a) the ABCD2 rule for prediction of 7 day stroke; and b) the CRB-65 rule for prediction of 30 day mortality. Predicted outcomes in a sample validation study were computed by CPR distribution patterns (“derivation model”). As confirmation, a logistic regression model (with derivation study coefficients) was applied to CPR-based dummy variables in the validation study. Meta-analysis of validation studies provided pooled estimates of “predicted:observed” risk ratios (RRs), 95% confidence intervals (CIs), and indexes of heterogeneity (I2) on forest plots (fixed and random effects models), with and without adjustment of intercepts. The above approach was also applied to the CRB-65 rule. Results: Our simplified method, applied to the ABCD2 rule in three risk strata (low, 0–3; intermediate, 4–5; high, 6–7 points), indicated that predictions are identical to those computed by a univariate, CPR-based logistic regression model. Discrimination was good (c-statistics = 0.61–0.82); however, calibration in some studies was low. In such cases of miscalibration, the under-prediction (RRs = 0.73–0.91, 95% CIs 0.41–1.48) could be corrected by intercept adjustment to account for incidence differences. An improvement of both heterogeneities and P-values (Hosmer-Lemeshow goodness-of-fit test) was observed. Better calibration and improved pooled RRs (0.90–1.06), with narrower 95% CIs (0.57–1.41), were achieved. Conclusion: Our results have an immediate clinical
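    The pooled "predicted:observed" RR estimates described above can be sketched with standard inverse-variance fixed-effect pooling on the log scale. This is a generic sketch, not the authors' code; the study RRs and standard errors in the usage example are hypothetical:

```python
import math

def pooled_rr_fixed(rrs, ses):
    """Inverse-variance fixed-effect pooling of risk ratios.

    rrs : per-study predicted:observed risk ratios
    ses : standard errors of the corresponding log(RR)s
    Returns (pooled RR, lower 95% CI, upper 95% CI).
    """
    weights = [1.0 / se ** 2 for se in ses]
    log_pooled = sum(w * math.log(rr) for w, rr in zip(weights, rrs)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    lo = math.exp(log_pooled - 1.96 * se_pooled)
    hi = math.exp(log_pooled + 1.96 * se_pooled)
    return math.exp(log_pooled), lo, hi

# Hypothetical three validation studies (values for illustration only).
rr, lo, hi = pooled_rr_fixed([0.80, 0.95, 1.10], [0.15, 0.20, 0.25])
```

    A random-effects variant would additionally estimate between-study variance (e.g. via DerSimonian-Laird) before weighting.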

  15. Analytical solutions of hypersonic type IV shock - shock interactions

    NASA Astrophysics Data System (ADS)

    Frame, Michael John

    An analytical model has been developed to predict the effects of a type IV shock interaction at high Mach numbers. This interaction occurs when an impinging oblique shock wave intersects the most normal portion of a detached bow shock. The flowfield which develops is complicated and contains an embedded jet of supersonic flow, which may be unsteady. The jet impinges on the blunt body surface, causing very high pressure and heating loads. Understanding this type of interaction is vital to the designers of cowl lips and leading edges on air-breathing hypersonic vehicles. This analytical model represents the first known attempt at predicting the geometry of the interaction explicitly, without knowing beforehand the jet dimensions, including the length of the transmitted shock where the jet originates. The model uses a hyperbolic equation for the bow shock and, by matching mass continuity, flow directions, and pressure throughout the flowfield, derives a prediction of the interaction geometry. The model has been shown to agree well with the flowfield patterns and properties of experiments and CFD, but the prediction for where the peak pressure is located, and its value, can be significantly in error due to a lack of sophistication in the model of the jet fluid stagnation region. Therefore, it is recommended that this region of the flowfield be modeled in more detail and that more accurate experimental and CFD measurements be used for validation. However, the analytical model has been shown to be a fast and economic prediction tool, suitable for preliminary design or for understanding the interaction's effects, including the basic physics of the interaction, such as the jet unsteadiness. The model has been used to examine a wide parametric space of possible interactions, including different Mach number, impinging shock strength and location, and cylinder radius. It has also been used to examine the interaction on power-law shaped blunt bodies, a possible candidate for

  16. Developing a Model and Applications for Probabilities of Student Success: A Case Study of Predictive Analytics

    ERIC Educational Resources Information Center

    Calvert, Carol Elaine

    2014-01-01

    This case study relates to distance learning students on open access courses. It demonstrates the use of predictive analytics to generate a model of the probabilities of success and retention at different points, or milestones, in a student journey. A core set of explanatory variables has been established and their varying relative importance at…

  17. Reliability and Validity of the Load-Velocity Relationship to Predict the 1RM Back Squat.

    PubMed

    Banyard, Harry G; Nosaka, Kazunori; Haff, G Gregory

    2017-07-01

    Banyard, HG, Nosaka, K, and Haff, GG. Reliability and validity of the load-velocity relationship to predict the 1RM back squat. J Strength Cond Res 31(7): 1897-1904, 2017-This study investigated the reliability and validity of the load-velocity relationship to predict the free-weight back squat one repetition maximum (1RM). Seventeen strength-trained males performed three 1RM assessments on 3 separate days. All repetitions were performed to full depth with maximal concentric effort. Predicted 1RMs were calculated by entering the mean concentric velocity of the 1RM (V1RM) into an individualized linear regression equation, which was derived from the load-velocity relationship of 3 (20, 40, 60% of 1RM), 4 (20, 40, 60, 80% of 1RM), or 5 (20, 40, 60, 80, 90% of 1RM) incremental warm-up sets. The actual 1RM (140.3 ± 27.2 kg) was very stable between 3 trials (ICC = 0.99; SEM = 2.9 kg; CV = 2.1%; ES = 0.11). Predicted 1RM from 5 warm-up sets up to and including 90% of 1RM was the most reliable (ICC = 0.92; SEM = 8.6 kg; CV = 5.7%; ES = -0.02) and valid (r = 0.93; SEE = 10.6 kg; CV = 7.4%; ES = 0.71) of the predicted 1RM methods. However, all predicted 1RMs were significantly different (p ≤ 0.05; ES = 0.71-1.04) from the actual 1RM. Individual variation for the actual 1RM was small between trials ranging from -5.6 to 4.8% compared with the most accurate predictive method up to 90% of 1RM, which was more variable (-5.5 to 27.8%). Importantly, the V1RM (0.24 ± 0.06 m·s⁻¹) was unreliable between trials (ICC = 0.42; SEM = 0.05 m·s⁻¹; CV = 22.5%; ES = 0.14). The load-velocity relationship for the full depth free-weight back squat showed moderate reliability and validity but could not accurately predict 1RM, which was stable between trials. Thus, the load-velocity relationship 1RM prediction method used in this study cannot accurately modify sessional training loads because of large V1RM variability.
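    The individualized prediction method can be sketched as an ordinary least-squares fit of load on mean concentric velocity over the warm-up sets, evaluated at V1RM. This is a minimal sketch of the described approach; the loads and velocities in the usage example are hypothetical, not the study's data:

```python
def predict_1rm(loads, velocities, v1rm):
    """Predict 1RM from the individualized load-velocity relationship.

    Fits load = a + b * velocity by ordinary least squares to the
    warm-up sets, then evaluates the line at the mean concentric
    velocity of the 1RM (v1rm).
    """
    n = len(loads)
    mean_v = sum(velocities) / n
    mean_l = sum(loads) / n
    sxy = sum((v - mean_v) * (l - mean_l) for v, l in zip(velocities, loads))
    sxx = sum((v - mean_v) ** 2 for v in velocities)
    b = sxy / sxx            # slope (kg per m/s, negative in practice)
    a = mean_l - b * mean_v  # intercept
    return a + b * v1rm

# Hypothetical 5-set warm-up (20-90% of a 140 kg 1RM) and V1RM.
estimate = predict_1rm([28, 56, 84, 112, 126],
                       [1.00, 0.85, 0.65, 0.45, 0.32], 0.24)
```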

  18. Contact-coupled impact of slender rods: analysis and experimental validation

    PubMed Central

    Tibbitts, Ira B.; Kakarla, Deepika; Siskey, Stephanie; Ochoa, Jorge A.; Ong, Kevin L.; Brannon, Rebecca M.

    2013-01-01

    To validate models of contact mechanics in low speed structural impact, slender rods were impacted in a drop tower, and measurements of the contact and vibration were compared to analytical and finite element (FE) models. The contact area was recorded using a novel thin-film transfer technique, and the contact duration was measured using electrical continuity. Strain gages recorded the vibratory strain in one rod, and a laser Doppler vibrometer measured speed. The experiment was modeled analytically on a one-dimensional spatial domain using a quasi-static Hertzian contact law and a system of delay differential equations. The three-dimensional FE model used hexahedral elements, a penalty contact algorithm, and explicit time integration. A small submodel taken from the initial global FE model economically refined the analysis in the small contact region. Measured contact areas were within 6% of both models’ predictions, peak speeds within 2%, cyclic strains within 12 με (RMS value), and contact durations within 2 μs. The global FE model and the measurements revealed small disturbances, not predicted by the analytical model, believed to be caused by interactions of the non-planar stress wavefront with the rod’s ends. The accuracy of the predictions for this simple test, as well as the versatility of the diagnostic tools, validates the theoretical and computational models, corroborates instrument calibration, and establishes confidence that the same methods may be used in experimental and computational study of contact mechanics during impact of more complicated structures. Recommendations are made for applying the methods to a particular biomechanical problem: the edge-loading of a loose prosthetic hip joint which can lead to premature wear and prosthesis failure. PMID:24729630

  19. Assessment of analytical techniques for predicting solid propellant exhaust plumes and plume impingement environments

    NASA Technical Reports Server (NTRS)

    Tevepaugh, J. A.; Smith, S. D.; Penny, M. M.

    1977-01-01

    An analysis of experimental nozzle, exhaust plume, and exhaust plume impingement data is presented. The data were obtained for subscale solid propellant motors with propellant Al loadings of 2, 10 and 15% exhausting to simulated altitudes of 50,000, 100,000 and 112,000 ft. Analytical predictions were made using a fully coupled two-phase method of characteristics numerical solution and a technique for defining thermal and pressure environments experienced by bodies immersed in two-phase exhaust plumes.

  20. External validation of a 5-year survival prediction model after elective abdominal aortic aneurysm repair.

    PubMed

    DeMartino, Randall R; Huang, Ying; Mandrekar, Jay; Goodney, Philip P; Oderich, Gustavo S; Kalra, Manju; Bower, Thomas C; Cronenwett, Jack L; Gloviczki, Peter

    2018-01-01

    The benefit of prophylactic repair of abdominal aortic aneurysms (AAAs) is based on the risk of rupture exceeding the risk of death from other comorbidities. The purpose of this study was to validate a 5-year survival prediction model for patients undergoing elective repair of asymptomatic AAA <6.5 cm to assist in optimal selection of patients. All patients undergoing elective repair for asymptomatic AAA <6.5 cm (open or endovascular) from 2002 to 2011 were identified from a single institutional database (validation group). We assessed the ability of a prior published Vascular Study Group of New England (VSGNE) model (derivation group) to predict survival in our cohort. The model was assessed for discrimination (concordance index), calibration (calibration slope and calibration in the large), and goodness of fit (score test). The VSGNE derivation group consisted of 2367 patients (70% endovascular). Major factors associated with survival in the derivation group were age, coronary disease, chronic obstructive pulmonary disease, renal function, and antiplatelet and statin medication use. Our validation group consisted of 1038 patients (59% endovascular). The validation group was slightly older (74 vs 72 years; P < .01) and had a higher proportion of men (76% vs 68%; P < .01). In addition, the derivation group had higher rates of advanced cardiac disease and chronic obstructive pulmonary disease and a higher baseline creatinine concentration (1.2 vs 1.1 mg/dL; P < .01). Despite slight differences in preoperative patient factors, 5-year survival was similar between validation and derivation groups (75% vs 77%; P = .33). The concordance index was identical between derivation and validation groups at 0.659 (95% confidence interval, 0.63-0.69). Our validation calibration in the large value was 1.02 (P = .62, closer to 1 indicating better calibration), calibration slope of 0.84 (95% confidence interval, 0.71-0.97), and score test of P = .57 (>.05

  1. Predictive Validity and Accuracy of Oral Reading Fluency for English Learners

    ERIC Educational Resources Information Center

    Vanderwood, Michael L.; Tung, Catherine Y.; Checca, C. Jason

    2014-01-01

    The predictive validity and accuracy of an oral reading fluency (ORF) measure for a statewide assessment in English language arts was examined for second-grade native English speakers (NESs) and English learners (ELs) with varying levels of English proficiency. In addition to comparing ELs with native English speakers, the impact of English…

  2. Independent external validation of predictive models for urinary dysfunction following external beam radiotherapy of the prostate: Issues in model development and reporting.

    PubMed

    Yahya, Noorazrul; Ebert, Martin A; Bulsara, Max; Kennedy, Angel; Joseph, David J; Denham, James W

    2016-08-01

    Most predictive models are not sufficiently validated for prospective use. We performed independent external validation of published predictive models for urinary dysfunctions following radiotherapy of the prostate. Multivariable models developed to predict atomised and generalised urinary symptoms, both acute and late, were considered for validation using a dataset representing 754 participants from the TROG 03.04-RADAR trial. Endpoints and features were harmonised to match the predictive models. The overall performance, calibration and discrimination were assessed. 14 models from four publications were validated. The discrimination of the predictive models in an independent external validation cohort, measured using the area under the receiver operating characteristic (ROC) curve, ranged from 0.473 to 0.695, generally lower than in internal validation. Four models had an AUC > 0.6. Shrinkage was required for all predictive models' coefficients, ranging from -0.309 (prediction probability was inverse to observed proportion) to 0.823. Predictive models which include baseline symptoms as a feature produced the highest discrimination. Two models produced a predicted probability of 0 and 1 for all patients. Predictive models vary in performance and transferability, illustrating the need for improvements in model development and reporting. Several models showed reasonable potential, but efforts should be increased to improve performance. Baseline symptoms should always be considered as potential features for predictive models. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  3. A valid model for predicting responsible nerve roots in lumbar degenerative disease with diagnostic doubt.

    PubMed

    Li, Xiaochuan; Bai, Xuedong; Wu, Yaohong; Ruan, Dike

    2016-03-15

    To construct and validate a model to predict responsible nerve roots in lumbar degenerative disease with diagnostic doubt (DD). From January 2009-January 2013, 163 patients with DD were assigned to the construction (n = 106) or validation sample (n = 57) according to different admission times to hospital. Outcome was assessed according to the Japanese Orthopedic Association (JOA) recovery rate as excellent, good, fair, or poor. The first two results were considered an effective clinical outcome (ECO). Baseline patient and clinical characteristics were considered as secondary variables. A multivariate logistic regression model was used to construct a model with the ECO as the dependent variable and other factors as explanatory variables. The odds ratios (ORs) of each risk factor were adjusted and transformed into a scoring system. Area under the curve (AUC) was calculated and validated in both internal and external samples. Moreover, the calibration plot and predictive ability of this scoring system were also tested for further validation. The proportion of patients with DD achieving ECOs was around 76% in both the construction and validation samples (76.4% and 75.5%, respectively). The factors independently associated with ECO were: higher preoperative visual analog pain scale (VAS) score (OR = 1.56, p < 0.01), stenosis levels of L4/5 or L5/S1 (OR = 1.44, p = 0.04), stenosis locations with neuroforamen (OR = 1.95, p = 0.01), neurological deficit (OR = 1.62, p = 0.01), and greater VAS improvement after selective nerve root block (SNRB) (OR = 3.42, p = 0.02). The internal area under the curve (AUC) was 0.85, and the external AUC was 0.72, with a good calibration plot of prediction accuracy. Besides, the predictive ability of ECOs was not different from the actual results (p = 0.532). We have constructed and validated a predictive model for confirming responsible nerve roots in patients with DD. The associated risk factors were preoperative VAS score, stenosis levels of L4/5 or L5/S1, stenosis locations
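    The abstract says the adjusted ORs were transformed into a scoring system but does not give the exact procedure. A common construction, shown here purely as an illustration (not necessarily the authors' method), scales each log-odds to the smallest one and rounds to integer points:

```python
import math

def score_points(odds_ratios):
    """Convert adjusted odds ratios into integer score points.

    Each log-odds is divided by the smallest log-odds and rounded,
    so the weakest risk factor contributes 1 point. This is a common
    scoring construction, not necessarily the one used in the paper.
    """
    logs = [math.log(orr) for orr in odds_ratios]
    base = min(logs)
    return [round(l / base) for l in logs]

# ORs from the abstract: VAS score, stenosis level, stenosis location,
# neurological deficit, VAS improvement after SNRB.
points = score_points([1.56, 1.44, 1.95, 1.62, 3.42])
```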

  4. A Python Analytical Pipeline to Identify Prohormone Precursors and Predict Prohormone Cleavage Sites

    PubMed Central

    Southey, Bruce R.; Sweedler, Jonathan V.; Rodriguez-Zas, Sandra L.

    2008-01-01

    Neuropeptides and hormones are signaling molecules that support cell–cell communication in the central nervous system. Experimentally characterizing neuropeptides requires significant efforts because of the complex and variable processing of prohormone precursor proteins into neuropeptides and hormones. We demonstrate the power and flexibility of the Python language to develop components of a bioinformatics analytical pipeline to identify precursors from genomic data and to predict cleavage as these precursors are en route to the final bioactive peptides. We identified 75 precursors in the rhesus genome, predicted cleavage sites using support vector machines and compared the rhesus predictions to putative assignments based on homology to human sequences. The correct classification rate of cleavage using the support vector machines was over 97% for both human and rhesus data sets. The functionality of Python has been important to develop and maintain NeuroPred (http://neuroproteomics.scs.uiuc.edu/neuropred.html), a user-centered web application for the neuroscience community that provides cleavage site prediction from a wide range of models, precision and accuracy statistics, post-translational modifications, and the molecular mass of potential peptides. The combined results illustrate the suitability of the Python language to implement an all-inclusive bioinformatics approach to predict neuropeptides that encompasses a large number of interdependent steps, from scanning genomes for precursor genes to identification of potential bioactive neuropeptides. PMID:19169350

  5. Validity of a Manual Soft Tissue Profile Prediction Method Following Mandibular Setback Osteotomy

    PubMed Central

    Kolokitha, Olga-Elpis

    2007-01-01

    Objectives The aim of this study was to determine the validity of a manual cephalometric method used for predicting the post-operative soft tissue profiles of patients who underwent mandibular setback surgery and compare it to a computerized cephalometric prediction method (Dentofacial Planner). Lateral cephalograms of 18 adults with mandibular prognathism taken at the end of pre-surgical orthodontics and approximately one year after surgery were used. Methods To test the validity of the manual method the prediction tracings were compared to the actual post-operative tracings. The Dentofacial Planner software was used to develop the computerized post-surgical prediction tracings. Both manual and computerized prediction printouts were analyzed by using the cephalometric system PORDIOS. Statistical analysis was performed by means of t-test. Results Comparison between manual prediction tracings and the actual post-operative profile showed that the manual method results in more convex soft tissue profiles; the upper lip was found in a more prominent position, upper lip thickness was increased and, the mandible and lower lip were found in a less posterior position than that of the actual profiles. Comparison between computerized and manual prediction methods showed that in the manual method upper lip thickness was increased, the upper lip was found in a more anterior position and the lower anterior facial height was increased as compared to the computerized prediction method. Conclusions Cephalometric simulation of post-operative soft tissue profile following orthodontic-surgical management of mandibular prognathism imposes certain limitations related to the methods implied. However, both manual and computerized prediction methods remain a useful tool for patient communication. PMID:19212468

  6. Valid analytical performance specifications for combined analytical bias and imprecision for the use of common reference intervals.

    PubMed

    Hyltoft Petersen, Per; Lund, Flemming; Fraser, Callum G; Sandberg, Sverre; Sölétormos, György

    2018-01-01

    Background: Many clinical decisions are based on comparison of patient results with reference intervals. Therefore, an estimation of the analytical performance specifications for the quality that would be required to allow sharing common reference intervals is needed. The International Federation of Clinical Chemistry (IFCC) recommended a minimum of 120 reference individuals to establish reference intervals. This number implies a certain level of quality, which could then be used for defining analytical performance specifications as the maximum combination of analytical bias and imprecision required for sharing common reference intervals, the aim of this investigation. Methods: Two methods were investigated for defining the maximum combination of analytical bias and imprecision that would give the same quality of common reference intervals as the IFCC recommendation. Method 1 is based on a formula for the combination of analytical bias and imprecision and Method 2 is based on the Microsoft Excel formula NORMINV including the fractional probability of reference individuals outside each limit and the Gaussian variables of mean and standard deviation. The combinations of normalized bias and imprecision are illustrated for both methods. The formulae are identical for Gaussian and log-Gaussian distributions. Results: Method 2 gives the correct results with a constant percentage of 4.4% for all combinations of bias and imprecision. Conclusion: The Microsoft Excel formula NORMINV is useful for the estimation of analytical performance specifications for both Gaussian and log-Gaussian distributions of reference intervals.
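    The NORMINV step (the Gaussian inverse CDF; `statistics.NormalDist.inv_cdf` in Python) can be illustrated by computing the fraction of a reference population falling outside the original 95% reference limits once a normalized analytical bias and extra imprecision are added. This is a simplified illustration of the principle, not the paper's exact 4.4% criterion:

```python
from statistics import NormalDist

def pct_outside(bias, imprecision):
    """Percentage of results outside the original 95% reference limits.

    bias        : analytical bias, in units of the reference SD
    imprecision : added analytical SD, in units of the reference SD
    The limits are the Gaussian 2.5/97.5 percentiles of the original
    distribution (the Excel NORMINV step corresponds to inv_cdf here).
    """
    nd = NormalDist()
    lo, hi = nd.inv_cdf(0.025), nd.inv_cdf(0.975)
    total_sd = (1.0 + imprecision ** 2) ** 0.5   # variances add
    shifted = NormalDist(mu=bias, sigma=total_sd)
    return 100.0 * (shifted.cdf(lo) + 1.0 - shifted.cdf(hi))

# With no bias and no extra imprecision, exactly 5% fall outside.
baseline = pct_outside(0.0, 0.0)
```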

  7. Validity of the SAT® for Predicting First-Year Grades: 2009 SAT Validity Sample. Statistical Report No. 2012-2

    ERIC Educational Resources Information Center

    Patterson, Brian F.; Mattern, Krista D.

    2009-01-01

    In an effort to continuously monitor the validity of the SAT for predicting first-year college grades, the College Board has continued its multi-year effort to recruit four-year colleges and universities (henceforth, "institutions") to provide data on the cohorts of first-time, first-year students entering in the fall semester beginning…

  8. Recent α decay half-lives and analytic expression predictions including superheavy nuclei

    NASA Astrophysics Data System (ADS)

    Royer, G.; Zhang, H. F.

    2008-03-01

    New recent experimental α decay half-lives have been compared with the results obtained from previously proposed formulas depending only on the mass and charge numbers of the α emitter and the Qα value. For the heaviest nuclei they are also compared with calculations using the Density-Dependent M3Y (DDM3Y) effective interaction and the Viola-Seaborg-Sobiczewski (VSS) formulas. The close agreement allows us to make predictions for the α decay half-lives of other still unknown superheavy nuclei from these analytic formulas using the extrapolated Qα of G. Audi, A. H. Wapstra, and C. Thibault [Nucl. Phys. A729, 337 (2003)].
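    Analytic formulas of this family depend only on A, Z, and Qα and are trivial to evaluate. The sketch below uses the coefficients of Royer's earlier (2000) fit for even-even emitters, quoted here as an assumption for illustration; the paper's exact formulas may differ:

```python
import math

def log10_halflife_ee(z, a, q_alpha):
    """Royer-type analytic estimate of log10(T1/2 / s) for even-even
    alpha emitters.

    z, a    : charge and mass numbers of the emitter
    q_alpha : alpha decay Q-value in MeV
    Coefficients are from Royer's 2000 even-even fit (assumed here).
    """
    return (-25.31
            - 1.1629 * a ** (1 / 6) * math.sqrt(z)
            + 1.5864 * z / math.sqrt(q_alpha))

# Example: 212Po (Z = 84, A = 212, Q_alpha ≈ 8.954 MeV) comes out
# within roughly half a decade of the measured ~0.3 microseconds.
estimate = log10_halflife_ee(84, 212, 8.954)
```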

  9. Recent {alpha} decay half-lives and analytic expression predictions including superheavy nuclei

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Royer, G.; Zhang, H. F.

    New recent experimental α decay half-lives have been compared with the results obtained from previously proposed formulas depending only on the mass and charge numbers of the α emitter and the Qα value. For the heaviest nuclei they are also compared with calculations using the Density-Dependent M3Y (DDM3Y) effective interaction and the Viola-Seaborg-Sobiczewski (VSS) formulas. The close agreement allows us to make predictions for the α decay half-lives of other still unknown superheavy nuclei from these analytic formulas using the extrapolated Qα of G. Audi, A. H. Wapstra, and C. Thibault [Nucl. Phys. A729, 337 (2003)].

  10. Evaluation of analytical procedures for prediction of turbulent boundary layers on a porous wall

    NASA Technical Reports Server (NTRS)

    Towne, C. E.

    1974-01-01

    An analytical study has been made to determine how well current boundary layer prediction techniques work when there is mass transfer normal to the wall. The data that were considered in this investigation were for two-dimensional, incompressible, turbulent boundary layers with suction and blowing. Some of the bleed data were taken in an adverse pressure gradient. An integral prediction method was used with three different porous wall skin friction relations, in addition to a solid-surface relation for the suction cases. A numerical prediction method was also used. Comparisons were made between theoretical and experimental skin friction coefficients, displacement and momentum thicknesses, and velocity profiles. The integral method with one of the porous wall skin friction laws gave very good agreement with data for most of the cases considered. The use of the solid-surface skin friction law caused the integral method to overpredict the effectiveness of the bleed. The numerical techniques also worked well for most of the cases.

  11. External validation of the NUn score for predicting anastomotic leakage after oesophageal resection.

    PubMed

    Paireder, Matthias; Jomrich, Gerd; Asari, Reza; Kristo, Ivan; Gleiss, Andreas; Preusser, Matthias; Schoppmann, Sebastian F

    2017-08-29

    Early detection of anastomotic leakage (AL) after oesophageal resection for malignancy is crucial. This retrospective study validates the NUn risk score, which predicts AL from C-reactive protein, albumin and white cell count, in patients undergoing oesophageal resection between 2003 and 2014. For validation of the NUn score a receiver operating characteristic (ROC) curve was estimated, and the area under the ROC curve (AUC) is reported with 95% confidence interval (CI). Among 258 patients (79.5% male), 32 patients showed signs of anastomotic leakage (12.4%). The NUn score in our data had a median of 9.3 (range 6.2-17.6). The odds ratio for AL was 1.31 (CI 1.03-1.67; p = 0.028). The AUC for AL was 0.59 (CI 0.47-0.72). Using the original cutoff value of 10, the sensitivity was 45.2% and the specificity was 73.8%. This results in a positive predictive value of 19.4% and a negative predictive value of 90.6%. The proportion of variation in AL occurrence explained by the NUn score was 2.5% (PEV = 0.025). This study provides an external validation of a simple risk score for AL after oesophageal resection. In this cohort, the NUn score is not useful due to its poor discrimination.
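    The reported predictive values follow from the sensitivity, specificity, and observed leak prevalence via Bayes' rule. A minimal sketch using the abstract's numbers reproduces the published 19.4%/90.6% to within rounding:

```python
def predictive_values(sens, spec, prevalence):
    """Positive and negative predictive values via Bayes' rule."""
    ppv = (sens * prevalence
           / (sens * prevalence + (1 - spec) * (1 - prevalence)))
    npv = (spec * (1 - prevalence)
           / (spec * (1 - prevalence) + (1 - sens) * prevalence))
    return ppv, npv

# Values from the abstract: sens 45.2%, spec 73.8%, AL prevalence 12.4%.
ppv, npv = predictive_values(0.452, 0.738, 0.124)
```

    This makes explicit why the NPV is high despite poor discrimination: at 12.4% prevalence, even a weak test rules out AL far more reliably than it rules it in.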

  12. Predictive Validity of National Basketball Association Draft Combine on Future Performance.

    PubMed

    Teramoto, Masaru; Cross, Chad L; Rieger, Randall H; Maak, Travis G; Willick, Stuart E

    2018-02-01

    Teramoto, M, Cross, CL, Rieger, RH, Maak, TG, and Willick, SE. Predictive validity of national basketball association draft combine on future performance. J Strength Cond Res 32(2): 396-408, 2018-The National Basketball Association (NBA) Draft Combine is an annual event where prospective players are evaluated in terms of their athletic abilities and basketball skills. Data collected at the Combine should help NBA teams select the right players for the upcoming NBA draft; however, its value for predicting future performance of players has not been examined. This study investigated the predictive validity of the NBA Draft Combine on future performance of basketball players. We performed a principal component analysis (PCA) on the 2010-2015 Combine data to reduce correlated variables (N = 234), a correlation analysis on the Combine data and future on-court performance to examine relationships (maximum pairwise N = 217), and a robust principal component regression (PCR) analysis to predict first-year and 3-year on-court performance from the Combine measures (N = 148 and 127, respectively). Three components were identified within the Combine data through PCA (= Combine subscales): length-size, power-quickness, and upper-body strength. As per the correlation analysis, the individual Combine items for anthropometrics, including height without shoes, standing reach, weight, wingspan, and hand length, as well as the Combine subscale of length-size, had positive, medium-to-large-sized correlations (r = 0.313-0.545) with defensive performance quantified by Defensive Box Plus/Minus. The robust PCR analysis showed that the Combine subscale of length-size was the predictor most significantly associated with future on-court performance (p ≤ 0.05), including Win Shares, Box Plus/Minus, and Value Over Replacement Player, followed by upper-body strength. In conclusion, the NBA Draft Combine has value for predicting future performance of players.

  13. The link between employee attitudes and employee effectiveness: Data matrix of meta-analytic estimates based on 1161 unique correlations.

    PubMed

    Mackay, Michael M

    2016-09-01

    This article offers a correlation matrix of meta-analytic estimates between various employee job attitudes (i.e., Employee engagement, job satisfaction, job involvement, and organizational commitment) and indicators of employee effectiveness (i.e., Focal performance, contextual performance, turnover intention, and absenteeism). The meta-analytic correlations in the matrix are based on over 1100 individual studies representing over 340,000 employees. Data was collected worldwide via employee self-report surveys. Structural path analyses based on the matrix, and the interpretation of the data, can be found in "Investigating the incremental validity of employee engagement in the prediction of employee effectiveness: a meta-analytic path analysis" (Mackay et al., 2016) [1].

  14. Development and validation of multivariable predictive model for thromboembolic events in lymphoma patients.

    PubMed

    Antic, Darko; Milic, Natasa; Nikolovski, Srdjan; Todorovic, Milena; Bila, Jelena; Djurdjevic, Predrag; Andjelic, Bosko; Djurasinovic, Vladislava; Sretenovic, Aleksandra; Vukovic, Vojin; Jelicic, Jelena; Hayman, Suzanne; Mihaljevic, Biljana

    2016-10-01

    Lymphoma patients are at increased risk of thromboembolic events, but thromboprophylaxis in these patients is largely underused. We sought to develop and validate a simple model, based on individual clinical and laboratory patient characteristics, that would designate lymphoma patients at risk for thromboembolic events. The study population included 1,820 lymphoma patients who were treated in the Lymphoma Departments at the Clinics of Hematology, Clinical Center of Serbia and Clinical Center Kragujevac. The model was developed using data from a derivation cohort (n = 1,236), and further assessed in the validation cohort (n = 584). Sixty-five patients (5.3%) in the derivation cohort and 34 (5.8%) patients in the validation cohort developed thromboembolic events. The variables independently associated with risk for thromboembolism were: previous venous and/or arterial events, mediastinal involvement, BMI >30 kg/m², reduced mobility, extranodal localization, development of neutropenia, and hemoglobin level <100 g/L. Based on the risk model score, the population was divided into the following risk categories: low (score 0-1), intermediate (score 2-3), and high (score >3). For patients classified at risk (intermediate and high-risk scores), the model produced a negative predictive value of 98.5%, a positive predictive value of 25.1%, sensitivity of 75.4%, and specificity of 87.5%. A high-risk score had a positive predictive value of 65.2%. The diagnostic performance measures retained similar values in the validation cohort. The developed prognostic Thrombosis Lymphoma (ThroLy) score is more specific for lymphoma patients than any other available score targeting thrombosis in cancer patients. Am. J. Hematol. 91:1014-1019, 2016. © 2016 Wiley Periodicals, Inc.
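    The risk stratification described above (low: score 0-1; intermediate: 2-3; high: >3) can be sketched as a simple additive scoring function. Note that the per-factor point weights below are hypothetical placeholders — the abstract reports only the factors and the category cutoffs, not the weights:

    ```python
    # Sketch of a ThroLy-style additive risk score. The point values are
    # HYPOTHETICAL (the abstract does not report them); only the list of
    # factors and the category cutoffs come from the source.
    FACTOR_POINTS = {
        "previous_thromboembolic_event": 2,  # hypothetical weight
        "mediastinal_involvement": 1,        # hypothetical weight
        "bmi_over_30": 1,                    # hypothetical weight
        "reduced_mobility": 1,               # hypothetical weight
        "extranodal_localization": 1,        # hypothetical weight
        "neutropenia": 1,                    # hypothetical weight
        "hemoglobin_below_100": 1,           # hypothetical weight
    }

    def throly_like_score(present_factors):
        """Sum points over the risk factors present for a patient."""
        return sum(FACTOR_POINTS[f] for f in present_factors)

    def risk_category(score):
        # Cutoffs from the abstract: low 0-1, intermediate 2-3, high >3.
        if score <= 1:
            return "low"
        if score <= 3:
            return "intermediate"
        return "high"
    ```

    For example, a patient with a previous thromboembolic event plus reduced mobility scores 3 under these placeholder weights and lands in the intermediate category.
    
    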

  15. Development and external validation of a risk-prediction model to predict 5-year overall survival in advanced larynx cancer.

    PubMed

    Petersen, Japke F; Stuiver, Martijn M; Timmermans, Adriana J; Chen, Amy; Zhang, Hongzhen; O'Neill, James P; Deady, Sandra; Vander Poorten, Vincent; Meulemans, Jeroen; Wennerberg, Johan; Skroder, Carl; Day, Andrew T; Koch, Wayne; van den Brekel, Michiel W M

    2018-05-01

    TNM classification inadequately estimates patient-specific overall survival (OS). We aimed to improve this by developing a risk-prediction model for patients with advanced larynx cancer. Cohort study. We developed a risk prediction model to estimate the 5-year OS rate based on a cohort of 3,442 patients with T3-T4, N0-N+, M0 larynx cancer. The model was internally validated using bootstrapping samples and externally validated on patient data from five external centers (n = 770). The main outcome was performance of the model as tested by discrimination, calibration, and the ability to distinguish risk groups based on tertiles from the derivation dataset. The model performance was compared to a model based on T and N classification only. We included age, gender, T and N classification, and subsite as prognostic variables in the standard model. After external validation, the standard model had a significantly better fit than a model based on T and N classification alone (C statistic, 0.59 vs. 0.55, P < .001). The model was able to distinguish well among three risk groups based on tertiles of the risk score. Adding treatment modality to the model did not decrease the predictive power. As a post hoc analysis, we tested the added value of comorbidity as scored by the American Society of Anesthesiologists score in a subsample, which increased the C statistic to 0.68. A risk prediction model for patients with advanced larynx cancer, consisting of readily available clinical variables, gives more accurate estimates of the 5-year survival rate than a model based on T and N classification alone. 2c. Laryngoscope, 128:1140-1145, 2018. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.

  16. External prognostic validations and comparisons of age- and gender-adjusted exercise capacity predictions.

    PubMed

    Kim, Esther S H; Ishwaran, Hemant; Blackstone, Eugene; Lauer, Michael S

    2007-11-06

    The purpose of this study was to externally validate the prognostic value of age- and gender-based nomograms and categorical definitions of impaired exercise capacity (EC). Exercise capacity predicts death, but its use in routine clinical practice is hampered by its close correlation with age and gender. For a median of 5 years, we followed 22,275 patients without known heart disease who underwent symptom-limited stress testing. Models for predicted or impaired EC were identified by literature search. Gender-specific multivariable proportional hazards models were constructed. Four methods were used to assess validity: the Akaike Information Criterion (AIC), the right-censored c-index in 100 out-of-bootstrap samples, the Nagelkerke R² index, and calculation of calibration error in 100 bootstrap samples. There were 646 and 430 deaths in 13,098 men and 9,177 women, respectively. Of the 7 models tested in men, a model based on a Veterans Affairs cohort (predicted metabolic equivalents [METs] = 18 - [0.15 x age]) had the highest AIC and R². In women, a model based on the St. James Take Heart Project (predicted METs = 14.7 - [0.13 x age]) performed best. Categorical definitions of fitness performed less well. Even after accounting for age and gender, there was still an important interaction with age, whereby predicted EC was a weaker predictor in older subjects (p for interaction <0.001 in men and 0.003 in women). Several methods describe EC accounting for age- and gender-related differences, but their ability to predict mortality differs. Simple cutoff values fail to fully describe EC's strong predictive value.
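    The two best-performing nomograms quoted above are simple linear functions of age; achieved exercise capacity is then typically expressed as a percentage of the nomogram-predicted value. A minimal sketch (the function names are my own, not from the paper):

    ```python
    def predicted_mets(age_years, sex):
        """Age- and gender-based predicted exercise capacity in METs.

        Equations as quoted in the abstract:
          men   (Veterans Affairs cohort):      18   - 0.15 * age
          women (St. James Take Heart Project): 14.7 - 0.13 * age
        """
        if sex == "male":
            return 18.0 - 0.15 * age_years
        if sex == "female":
            return 14.7 - 0.13 * age_years
        raise ValueError("sex must be 'male' or 'female'")

    def percent_of_predicted(achieved_mets, age_years, sex):
        """Achieved EC as a percentage of the nomogram-predicted value."""
        return 100.0 * achieved_mets / predicted_mets(age_years, sex)
    ```

    For example, a 60-year-old man achieving 9 METs performs at exactly 100% of predicted (18 - 0.15 × 60 = 9).
    
    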

  17. Chemotherapy effectiveness and mortality prediction in surgically treated osteosarcoma dogs: A validation study.

    PubMed

    Schmidt, A F; Nielen, M; Withrow, S J; Selmic, L E; Burton, J H; Klungel, O H; Groenwold, R H H; Kirpensteijn, J

    2016-03-01

    Canine osteosarcoma is the most common bone cancer, and an important cause of mortality and morbidity, in large purebred dogs. Previously we constructed two multivariable models to predict a dog's 5-month or 1-year mortality risk after surgical treatment for osteosarcoma. According to the 5-month model, dogs with a relatively low risk of 5-month mortality benefited most from additional chemotherapy treatment. In the present study, we externally validated these results using an independent cohort study of 794 dogs. External performance of our prediction models showed some disagreement between observed and predicted risk, mean difference: -0.11 (95% confidence interval [95% CI] -0.29; 0.08) for 5-month risk and 0.25 (95% CI 0.10; 0.40) for 1-year mortality risk. After updating the intercept, agreement improved: -0.0004 (95% CI -0.16; 0.16) and -0.002 (95% CI -0.15; 0.15). The chemotherapy by predicted mortality risk interaction (P-value=0.01) showed that the effectiveness of chemotherapy, compared to no chemotherapy, was modified by 5-month mortality risk: dogs with a relatively lower risk of mortality benefited most from additional chemotherapy. Chemotherapy effectiveness on 1-year mortality was not significantly modified by predicted risk (P-value=0.28). In conclusion, this external validation study confirmed that our multivariable risk prediction models can predict a patient's mortality risk and that dogs with a relatively lower risk of 5-month mortality seem to benefit most from chemotherapy. Copyright © 2016 Elsevier B.V. All rights reserved.
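    The intercept update reported above is an instance of what is often called recalibration-in-the-large for a logistic model: the linear predictor is shifted by a constant so that the mean predicted risk matches the observed event rate in the new cohort. A minimal sketch, assuming a logistic model and solving for the shift by bisection (the solver and function names are my own, not from the paper):

    ```python
    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def intercept_shift(linear_predictors, observed_rate, lo=-10.0, hi=10.0):
        """Constant added to every linear predictor so that the mean
        predicted risk equals the observed event rate. Mean risk is
        monotonically increasing in the shift, so bisection converges."""
        def mean_risk(delta):
            return sum(sigmoid(lp + delta) for lp in linear_predictors) / len(linear_predictors)
        for _ in range(200):
            mid = (lo + hi) / 2.0
            if mean_risk(mid) < observed_rate:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2.0
    ```

    With the shift applied, the recalibrated model keeps its discrimination (ranking of patients) while correcting the systematic over- or under-estimation seen in the external cohort.
    
    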

  18. Development and validation of an ICD-10-based disability predictive index for patients admitted to hospitals with trauma.

    PubMed

    Wada, Tomoki; Yasunaga, Hideo; Yamana, Hayato; Matsui, Hiroki; Fushimi, Kiyohide; Morimura, Naoto

    2018-03-01

    There was no established disability predictive measurement for patients with trauma that could be used in administrative claims databases. The aim of the present study was to develop and validate a diagnosis-based disability predictive index for severe physical disability at discharge using the International Classification of Diseases, 10th revision (ICD-10) coding. This retrospective observational study used the Diagnosis Procedure Combination database in Japan. Patients who were admitted to hospitals with trauma and discharged alive from 01 April 2010 to 31 March 2015 were included. Pediatric patients under 15 years old were excluded. Data for patients admitted to hospitals from 01 April 2010 to 31 March 2013 was used for development of a disability predictive index (derivation cohort), while data for patients admitted to hospitals from 01 April 2013 to 31 March 2015 was used for the internal validation (validation cohort). The outcome of interest was severe physical disability defined as the Barthel Index score of <60 at discharge. Trauma-related ICD-10 codes were categorized into 36 injury groups with reference to the categorization used in the Global Burden of Diseases study 2013. A multivariable logistic regression analysis was performed for the outcome using the injury groups and patient baseline characteristics including patient age, sex, and Charlson Comorbidity Index (CCI) score in the derivation cohort. A score corresponding to a regression coefficient was assigned to each injury group. The disability predictive index for each patient was defined as the sum of the scores. The predictive performance of the index was validated using the receiver operating characteristic curve analysis in the validation cohort. The derivation cohort included 1,475,158 patients, while the validation cohort included 939,659 patients. Of the 939,659 patients, 235,382 (25.0%) were discharged with severe physical disability. The c-statistics of the disability predictive index

  19. Recidivism in female offenders: PCL-R lifestyle factor and VRAG show predictive validity in a German sample.

    PubMed

    Eisenbarth, Hedwig; Osterheider, Michael; Nedopil, Norbert; Stadtland, Cornelis

    2012-01-01

    A clear and structured approach to evidence-based and gender-specific risk assessment of violence in female offenders is high on political and mental health agendas. However, most data on the factors involved in risk-assessment instruments are based on data of male offenders. The aim of the present study was to validate the use of the Psychopathy Checklist Revised (PCL-R), the HCR-20 and the Violence Risk Appraisal Guide (VRAG) for the prediction of recidivism in German female offenders. This study is part of the Munich Prognosis Project (MPP). It focuses on a subsample of female delinquents (n = 80) who had been referred for forensic-psychiatric evaluation prior to sentencing. The mean time at risk was 8 years (SD = 5 years; range: 1-18 years). During this time, 31% (n = 25) of the female offenders were reconvicted, 5% (n = 4) for violent and 26% (n = 21) for non-violent re-offenses. The predictive validity of the PCL-R for general recidivism was calculated. Analysis with receiver-operating characteristics revealed that the PCL-R total score, the PCL-R antisocial lifestyle factor, the PCL-R lifestyle factor and the PCL-R impulsive and irresponsible behavioral style factor had a moderate predictive validity for general recidivism (area under the curve, AUC = 0.66, p = 0.02). The VRAG has also demonstrated predictive validity (AUC = 0.72, p = 0.02), whereas the HCR-20 showed no predictive validity. These results appear to provide the first evidence that the PCL-R total score and the antisocial lifestyle factor are predictive for general female recidivism, as has been shown consistently for male recidivists. The implications of these findings for crime prevention, prognosis in women, and future research are discussed. Copyright © 2012 John Wiley & Sons, Ltd.

  20. Scattering From the Finite-Length, Dielectric Circular Cylinder. Part 2 - On the Validity of an Analytical Solution for Characterizing Backscattering from Tree Trunks at P-Band

    DTIC Science & Technology

    2015-09-01

    This report assesses the accuracy of an analytical solution for characterizing the backscattering responses of circular cylindrical tree trunks located above a dielectric ground at P-band. The analytical solution is validated against a full-wave solution for untapered, linearly tapered, and nonlinearly tapered circular cylindrical trunks. (Report contents: Introduction; Analytical Solution; Validation with Full-Wave Solution; Conclusions; References; Appendix.)

  1. Steady-state analytical model of suspended p-type 3C-SiC bridges under consideration of Joule heating

    NASA Astrophysics Data System (ADS)

    Balakrishnan, Vivekananthan; Dinh, Toan; Phan, Hoang-Phuong; Kozeki, Takahiro; Namazu, Takahiro; Viet Dao, Dzung; Nguyen, Nam-Trung

    2017-07-01

    This paper reports an analytical model and its validation for a released microscale heater made of 3C-SiC thin films. A model for the equivalent electrical and thermal parameters was developed for the two-layer multi-segment heat and electric conduction. The model is based on a 1D energy equation, which considers the temperature-dependent resistivity and allows for the prediction of voltage-current and power-current characteristics of the microheater. The steady-state analytical model was validated by experimental characterization. The results, in particular the nonlinearity caused by temperature dependency, are in good agreement. The low power consumption of the order of 0.18 mW at approximately 310 K indicates the potential use of the structure as thermal sensors in portable applications.

  2. Implementation and Initial Validation of the APS English Test [and] The APS English-Writing Test at Golden West College: Evidence for Predictive Validity.

    ERIC Educational Resources Information Center

    Isonio, Steven

    In May 1991, Golden West College (California) conducted a validation study of the English portion of the Assessment and Placement Services for Community Colleges (APS), followed by a predictive validity study in July 1991. The initial study was designed to aid in the implementation of the new test at GWC by comparing data on APS use at other…

  3. Helios: Understanding Solar Evolution Through Text Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Randazzese, Lucien

    This proof-of-concept project focused on developing, testing, and validating a range of bibliometric, text analytic, and machine-learning based methods to explore the evolution of three photovoltaic (PV) technologies: Cadmium Telluride (CdTe), Dye-Sensitized solar cells (DSSC), and Multi-junction solar cells. The analytical approach to the work was inspired by previous work by the same team to measure and predict the scientific prominence of terms and entities within specific research domains. The goal was to create tools that could assist domain-knowledgeable analysts in investigating the history and path of technological developments in general, with a focus on analyzing step-function changes in performance, or “breakthroughs,” in particular. The text-analytics platform developed during this project was dubbed Helios. The project relied on computational methods for analyzing large corpora of technical documents. For this project we ingested technical documents from the following sources into Helios: Thomson Scientific Web of Science (papers), the U.S. Patent & Trademark Office (patents), the U.S. Department of Energy (technical documents), the U.S. National Science Foundation (project funding summaries), and a hand-curated set of full-text documents from Thomson Scientific and other sources.

  4. Validity of one-repetition maximum predictive equations in men with spinal cord injury.

    PubMed

    Ribeiro Neto, F; Guanais, P; Dornelas, E; Coutinho, A C B; Costa, R R G

    2017-10-01

    Cross-sectional study. The study aimed (a) to test the cross-validation of current one-repetition maximum (1RM) predictive equations in men with spinal cord injury (SCI); and (b) to compare the current 1RM predictive equations to a newly developed equation based on the 4- to 12-repetition maximum test (4-12RM). SARAH Rehabilitation Hospital Network, Brasilia, Brazil. Forty-five men (mean age 28.0 years) with SCI between C6 and L2 causing complete motor impairment were enrolled in the study. Volunteers were tested, in random order, in the 1RM test or 4-12RM, with 2-3 days between tests. Multiple regression analysis was used to generate an equation for predicting 1RM. There were no significant differences between the 1RM test and the current predictive equations. ICC values were significant and were classified as excellent for all current predictive equations. The predictive equation of Lombardi presented the best Bland-Altman results (0.5 kg and 12.8 kg for mean difference and interval range around the differences, respectively). The two newly created 1RM equation models demonstrated the same high adjusted R² (0.971, P<0.01) but different SEEs of measured 1RM (2.88 kg or 5.4%, and 2.90 kg or 5.5%). All 1RM predictive equations are accurate for assessing individuals with SCI in the bench press exercise. However, the predictive equation of Lombardi presented the best associated cross-validity results. A specific 1RM prediction equation was also elaborated for individuals with SCI. The created equation should be tested in order to verify whether it presents better accuracy than the current ones.
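    The Lombardi equation referenced above estimates 1RM from a submaximal load and repetition count; its commonly cited form is 1RM = load × reps^0.10. The sketch below assumes that form, since the abstract does not restate the formula:

    ```python
    def lombardi_1rm(load_kg, reps):
        """Estimate one-repetition maximum via the Lombardi equation.

        Assumes the commonly cited form 1RM = load * reps**0.10; the
        abstract itself does not restate the formula.
        """
        if reps < 1:
            raise ValueError("reps must be >= 1")
        return load_kg * reps ** 0.10
    ```

    By construction, a single max-effort repetition returns the load unchanged (lombardi_1rm(100, 1) == 100), while 10 repetitions at 80 kg yield an estimated 1RM of roughly 100.7 kg.
    
    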

  5. Application of analytical quality by design principles for the determination of alkyl p-toluenesulfonates impurities in Aprepitant by HPLC. Validation using total-error concept.

    PubMed

    Zacharis, Constantinos K; Vastardi, Elli

    2018-02-20

    In the research presented we report the development of a simple and robust liquid chromatographic method for the quantification of two genotoxic alkyl sulphonate impurities (namely methyl p-toluenesulfonate and isopropyl p-toluenesulfonate) in Aprepitant API substances using the Analytical Quality by Design (AQbD) approach. Following the steps of the AQbD protocol, the selected critical method attributes (CMAs) were the separation criteria between the critical peak pairs, the analysis time, and the peak efficiencies of the analytes. The critical method parameters (CMPs) included the flow rate, the gradient slope, and the acetonitrile content at the first step of the gradient elution program. Multivariate experimental designs, namely Plackett-Burman and Box-Behnken designs, were conducted sequentially for factor screening and optimization of the method parameters. The optimal separation conditions were estimated using the desirability function. The method was fully validated in the range of 10-200% of the target concentration limit of the analytes using the "total error" approach. Accuracy profiles - a graphical decision-making tool - were constructed using the results of the validation procedures. The β-expectation tolerance intervals did not exceed the acceptance criteria of ±10%, meaning that 95% of future results will be included in the defined bias limits. The relative bias ranged between -1.3% and 3.8% for both analytes, while the RSD values for repeatability and intermediate precision were less than 1.9% in all cases. The achieved limit of detection (LOD) and limit of quantification (LOQ) were adequate for the specific purpose and found to be 0.02% (corresponding to 48 μg g⁻¹ in sample) for both methyl and isopropyl p-toluenesulfonate. As proof-of-concept, the validated method was successfully applied in the analysis of several Aprepitant batches, indicating that this methodology could be used for routine quality control analyses. Copyright © 2017 Elsevier B

  6. An analytical framework to assist decision makers in the use of forest ecosystem model predictions

    USGS Publications Warehouse

    Larocque, Guy R.; Bhatti, Jagtar S.; Ascough, J.C.; Liu, J.; Luckai, N.; Mailly, D.; Archambault, L.; Gordon, Andrew M.

    2011-01-01

    The predictions from most forest ecosystem models originate from deterministic simulations. However, few evaluation exercises for model outputs are performed by either model developers or users. This issue has important consequences for decision makers using these models to develop natural resource management policies, as they cannot evaluate the extent to which predictions stemming from the simulation of alternative management scenarios may result in significant environmental or economic differences. Various numerical methods, such as sensitivity/uncertainty analyses, or bootstrap methods, may be used to evaluate models and the errors associated with their outputs. However, the application of each of these methods carries unique challenges which decision makers do not necessarily understand; guidance is required when interpreting the output generated from each model. This paper proposes a decision flow chart in the form of an analytical framework to help decision makers apply, in an orderly fashion, different steps involved in examining the model outputs. The analytical framework is discussed with regard to the definition of problems and objectives and includes the following topics: model selection, identification of alternatives, modelling tasks and selecting alternatives for developing policy or implementing management scenarios. Its application is illustrated using an on-going exercise in developing silvicultural guidelines for a forest management enterprise in Ontario, Canada.

  7. Development and validation of a predictive score for perioperative transfusion in patients with hepatocellular carcinoma undergoing liver resection.

    PubMed

    Wang, Hai-Qing; Yang, Jian; Yang, Jia-Yin; Wang, Wen-Tao; Yan, Lu-Nan

    2015-08-01

    Liver resection is a major surgery requiring perioperative blood transfusion. Predicting the need for blood transfusion in patients undergoing liver resection is of great importance. The present study aimed to develop and validate a model for predicting transfusion requirement in HBV-related hepatocellular carcinoma patients undergoing liver resection. A total of 1,543 consecutive liver resections were included in the study. A randomly selected sample set of 1,080 cases (70% of the study cohort) was used to develop a predictive score for transfusion requirement, and the remaining 30% (n=463) was used to validate the score. Based on the preoperative and predictable intraoperative parameters, logistic regression was used to identify risk factors and to create an integer score for the prediction of transfusion requirement. Extrahepatic procedure, major liver resection, hemoglobin level, and platelet count were identified as independent predictors of transfusion requirement by logistic regression analysis. A score system integrating these 4 factors was stratified into three groups which could predict the risk of transfusion, with rates of 11.4%, 24.7%, and 57.4% for low, moderate, and high risk, respectively. The prediction model appeared accurate, with good discriminatory ability, generating an area under the receiver operating characteristic curve of 0.736 in the development set and 0.709 in the validation set. We have developed and validated an integer-based risk score to predict perioperative transfusion for patients undergoing liver resection in a high-volume surgical center. This score allows identification of patients at high risk and may alter transfusion practices.

  8. The predictive validity of the BioMedical Admissions Test for pre-clinical examination performance.

    PubMed

    Emery, Joanne L; Bell, John F

    2009-06-01

    Some medical courses in the UK have many more applicants than places and almost all applicants have the highest possible previous and predicted examination grades. The BioMedical Admissions Test (BMAT) was designed to assist in the student selection process specifically for a number of 'traditional' medical courses with clear pre-clinical and clinical phases and a strong focus on science teaching in the early years. It is intended to supplement the information provided by examination results, interviews and personal statements. This paper reports on the predictive validity of the BMAT and its predecessor, the Medical and Veterinary Admissions Test. Results from the earliest 4 years of the test (2000-2003) were matched to the pre-clinical examination results of those accepted onto the medical course at the University of Cambridge. Correlation and logistic regression analyses were performed for each cohort. Section 2 of the test ('Scientific Knowledge') correlated more strongly with examination marks than did Section 1 ('Aptitude and Skills'). It also had a stronger relationship with the probability of achieving the highest examination class. The BMAT and its predecessor demonstrate predictive validity for the pre-clinical years of the medical course at the University of Cambridge. The test identifies important differences in skills and knowledge between candidates, not shown by their previous attainment, which predict their examination performance. It is thus a valid source of additional admissions information for medical courses with a strong scientific emphasis when previous attainment is very high.

  9. Incremental Validity of Biographical Data in the Prediction of En Route Air Traffic Control Specialist Technical Skills

    DTIC Science & Technology

    2012-07-01

    Dana Broach, Civil Aerospace Medical Institute, Federal Aviation Administration, Oklahoma City, OK 73125. Final Report DOT/FAA/AM-12/8, Office of Aerospace Medicine, July 2012.

  10. Evaluation of the Predictive Validity of Thermography in Identifying Extravasation With Intravenous Chemotherapy Infusions.

    PubMed

    Matsui, Yuko; Murayama, Ryoko; Tanabe, Hidenori; Oe, Makoto; Motoo, Yoshiharu; Wagatsuma, Takanori; Michibuchi, Michiko; Kinoshita, Sachiko; Sakai, Keiko; Konya, Chizuko; Sugama, Junko; Sanada, Hiromi

    Early detection of extravasation is important, but conventional methods of detection lack objectivity and reliability. This study evaluated the predictive validity of thermography for identifying extravasation during intravenous antineoplastic therapy. Of 257 patients who received chemotherapy through peripheral veins, extravasation was identified in 26. Thermography was performed every 15 to 30 minutes during the infusions. Sensitivity, specificity, positive predictive value, and negative predictive value using thermography were 84.6%, 94.8%, 64.7%, and 98.2%, respectively. This study showed that thermography offers an accurate prediction of extravasation.
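    The four reported metrics derive from a standard 2×2 confusion matrix. The cell counts below (TP=22, FP=12, FN=4, TN=219) are inferred from the reported rates and the cohort size (26 extravasations among 257 patients), so treat them as a reconstruction rather than published counts:

    ```python
    def diagnostic_metrics(tp, fp, fn, tn):
        """Standard 2x2 confusion-matrix metrics for a screening test."""
        return {
            "sensitivity": tp / (tp + fn),  # true-positive rate
            "specificity": tn / (tn + fp),  # true-negative rate
            "ppv": tp / (tp + fp),          # positive predictive value
            "npv": tn / (tn + fn),          # negative predictive value
        }

    # Counts reconstructed from the abstract's rates, not reported directly:
    # 26 extravasations among 257 patients, sensitivity 84.6%, etc.
    m = diagnostic_metrics(tp=22, fp=12, fn=4, tn=219)
    # sensitivity ≈ 0.846, specificity ≈ 0.948, ppv ≈ 0.647, npv ≈ 0.982
    ```

    The high NPV with a modest PPV reflects the low base rate of extravasation (about 10%): a negative thermogram is strongly reassuring, while a positive one still warrants confirmation.
    
    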

  11. Evaluation of the Predictive Validity of Thermography in Identifying Extravasation With Intravenous Chemotherapy Infusions

    PubMed Central

    Murayama, Ryoko; Tanabe, Hidenori; Oe, Makoto; Motoo, Yoshiharu; Wagatsuma, Takanori; Michibuchi, Michiko; Kinoshita, Sachiko; Sakai, Keiko; Konya, Chizuko; Sugama, Junko; Sanada, Hiromi

    2017-01-01

    Early detection of extravasation is important, but conventional methods of detection lack objectivity and reliability. This study evaluated the predictive validity of thermography for identifying extravasation during intravenous antineoplastic therapy. Of 257 patients who received chemotherapy through peripheral veins, extravasation was identified in 26. Thermography was performed every 15 to 30 minutes during the infusions. Sensitivity, specificity, positive predictive value, and negative predictive value using thermography were 84.6%, 94.8%, 64.7%, and 98.2%, respectively. This study showed that thermography offers an accurate prediction of extravasation. PMID:29112585

  12. A Comparative Study of Adolescent Risk Assessment Instruments: Predictive and Incremental Validity

    ERIC Educational Resources Information Center

    Welsh, Jennifer L.; Schmidt, Fred; McKinnon, Lauren; Chattha, H. K.; Meyers, Joanna R.

    2008-01-01

    Promising new adolescent risk assessment tools are being incorporated into clinical practice but currently possess limited evidence of predictive validity regarding their individual and/or combined use in risk assessments. The current study compares three structured adolescent risk instruments, Youth Level of Service/Case Management Inventory…

  13. Validation of statistical predictive models meant to select melanoma patients for sentinel lymph node biopsy.

    PubMed

    Sabel, Michael S; Rice, John D; Griffith, Kent A; Lowe, Lori; Wong, Sandra L; Chang, Alfred E; Johnson, Timothy M; Taylor, Jeremy M G

    2012-01-01

    To identify melanoma patients at sufficiently low risk of nodal metastases who could avoid sentinel lymph node biopsy (SLNB), several statistical models have been proposed based upon patient/tumor characteristics, including logistic regression, classification trees, random forests, and support vector machines. We sought to validate recently published models meant to predict sentinel node status. We queried our comprehensive, prospectively collected melanoma database for consecutive melanoma patients undergoing SLNB. Prediction values were estimated based upon four published models, calculating the same reported metrics: negative predictive value (NPV), rate of negative predictions (RNP), and false-negative rate (FNR). Logistic regression performed comparably with our data when considering NPV (89.4% versus 93.6%); however, the model's specificity was not high enough to significantly reduce the rate of biopsies (SLN reduction rate of 2.9%). When applied to our data, the classification tree produced NPV and reduction in biopsy rates that were lower (87.7% versus 94.1% and 29.8% versus 14.3%, respectively). Two published models could not be applied to our data due to model complexity and the use of proprietary software. Published models meant to reduce the SLNB rate among patients with melanoma either underperformed when applied to our larger dataset, or could not be validated. Differences in selection criteria and histopathologic interpretation likely resulted in underperformance. Statistical predictive models must be developed in a clinically applicable manner to allow for both validation and ultimately clinical utility.

  14. Validation of Statistical Predictive Models Meant to Select Melanoma Patients for Sentinel Lymph Node Biopsy

    PubMed Central

    Sabel, Michael S.; Rice, John D.; Griffith, Kent A.; Lowe, Lori; Wong, Sandra L.; Chang, Alfred E.; Johnson, Timothy M.; Taylor, Jeremy M.G.

    2013-01-01

Introduction: To identify melanoma patients at sufficiently low risk of nodal metastases who could avoid sentinel lymph node biopsy (SLNB), several statistical models have been proposed based upon patient/tumor characteristics, including logistic regression, classification trees, random forests, and support vector machines. We sought to validate recently published models meant to predict sentinel node status. Methods: We queried our comprehensive, prospectively collected melanoma database for consecutive melanoma patients undergoing SLNB. Prediction values were estimated based upon 4 published models, calculating the same reported metrics: negative predictive value (NPV), rate of negative predictions (RNP), and false-negative rate (FNR). Results: Logistic regression performed comparably with our data when considering NPV (89.4% vs. 93.6%); however, the model's specificity was not high enough to significantly reduce the rate of biopsies (SLN reduction rate of 2.9%). When applied to our data, the classification tree produced NPV and biopsy-reduction rates that were lower (87.7% vs. 94.1% and 29.8% vs. 14.3%, respectively). Two published models could not be applied to our data due to model complexity and the use of proprietary software. Conclusions: Published models meant to reduce the SLNB rate among patients with melanoma either underperformed when applied to our larger dataset or could not be validated. Differences in selection criteria and histopathologic interpretation likely resulted in underperformance. Statistical predictive models must be developed in a clinically applicable manner to allow for both validation and, ultimately, clinical utility. PMID:21822550

  15. Analytical Plan for Roman Glasses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strachan, Denis M.; Buck, Edgar C.; Mueller, Karl T.

Roman glasses that have been in the sea or underground for about 1800 years can serve as the independent “experiment” needed to validate the codes and models used in performance assessment. Two sets of Roman-era glasses have been obtained for this purpose. One set comes from the sunken vessel Iulia Felix; the second from recently excavated glasses from a Roman villa in Aquileia, Italy. The specimens comprise glass artifacts and attached sediment or soil. In the case of the Iulia Felix glasses, extensive analytical work has been completed at the University of Padova, but from an archaeological perspective. The glasses from Aquileia have not been so thoroughly analyzed, but they are similar to other Roman glasses. Both the glass and the sediment or soil need to be analyzed and are the subject of this analytical plan. The glasses need to be analyzed with the goal of validating the model used to describe glass dissolution. The sediment and soil need to be analyzed to determine the profile of elements released from the glass; this represents a significant analytical challenge because of the trace quantities involved. Together, these results will be useful in validating the glass dissolution model and the chemical transport code(s) used to determine the migration of elements once released from the glass. In this plan, we outline the analytical techniques that should be useful in obtaining the needed information and suggest a useful starting point for this analytical effort.

  16. Development and in-line validation of a Process Analytical Technology to facilitate the scale up of coating processes.

    PubMed

    Wirges, M; Funke, A; Serno, P; Knop, K; Kleinebudde, P

    2013-05-05

Incorporation of an active pharmaceutical ingredient (API) into the coating layer of film-coated tablets is a method mainly used to formulate fixed-dose combinations. Uniform and precise spray-coating of an API represents a substantial challenge, which can be overcome by applying Raman spectroscopy as a process analytical tool. In the pharmaceutical industry, Raman spectroscopy is still mainly used as a benchtop laboratory analytical method and is usually not implemented in the production process. Concerning application in the production process, many scientific approaches stop at the level of feasibility studies and do not make the step to production-scale processes and applications. The present work focuses on the scale-up of an active coating process, a step of the highest importance during pharmaceutical development. Active coating experiments were performed at lab and production scale. Using partial least squares (PLS), a multivariate model was constructed by correlating in-line measured Raman spectral data with the coated amount of API. By transferring this model, implemented for a lab-scale process, to a production-scale process, the robustness of this analytical method, and thus its applicability as a Process Analytical Technology (PAT) tool for correct endpoint determination in pharmaceutical manufacturing, could be shown. Finally, the method was validated according to the European Medicines Agency (EMA) guideline, with respect to the special requirements of the applied in-line model development strategy. Copyright © 2013 Elsevier B.V. All rights reserved.
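The calibration step described above, correlating in-line spectra with the coated amount of API via PLS, can be sketched as follows. This is a minimal single-response NIPALS implementation on synthetic "spectra"; the band loadings and amounts are invented for illustration, and production work would use a validated chemometrics package:

```python
import numpy as np

def fit_pls1(X, y, n_components):
    """Minimal single-response PLS (NIPALS sketch): relate spectra X
    to a scalar response y (here, coated API amount)."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xk, yk = X - x_mean, y - y_mean
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)        # weight vector (spectral direction)
        t = Xk @ w                    # scores
        tt = t @ t
        p = Xk.T @ t / tt             # X loadings
        q = (yk @ t) / tt             # y loading
        Xk = Xk - np.outer(t, p)      # deflate and iterate
        yk = yk - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P = np.array(W).T, np.array(P).T
    B = W @ np.linalg.solve(P.T @ W, np.array(Q))  # coefficients in X space
    return lambda X_new: y_mean + (X_new - x_mean) @ B

# synthetic "spectra": one latent factor drives a Raman band pattern
rng = np.random.default_rng(1)
latent = rng.normal(size=40)                     # tracks coating progress
loadings = np.array([0.2, 1.0, 0.7, 0.1, 0.4])   # hypothetical band intensities
X = np.outer(latent, loadings) + 0.01 * rng.normal(size=(40, 5))
y = 5.0 + 2.0 * latent                           # coated API amount (a.u.)
predict = fit_pls1(X, y, n_components=1)
```

In the study's setting, a `predict` built on lab-scale spectra would then be applied to production-scale spectra to track the coated amount in-line and call the endpoint.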

  17. Perioperative Respiratory Adverse Events in Pediatric Ambulatory Anesthesia: Development and Validation of a Risk Prediction Tool.

    PubMed

    Subramanyam, Rajeev; Yeramaneni, Samrat; Hossain, Mohamed Monir; Anneken, Amy M; Varughese, Anna M

    2016-05-01

Perioperative respiratory adverse events (PRAEs) are the most common cause of serious adverse events in children receiving anesthesia. The primary aim of this study was to develop and validate a risk prediction tool for the occurrence of PRAE, from the onset of anesthesia induction until discharge from the postanesthesia care unit, in children younger than 18 years undergoing elective ambulatory anesthesia for surgery and radiology. The incidence of PRAE was also studied. We analyzed data from 19,059 patients from our department's quality improvement database. The predictor variables were age, sex, ASA physical status, morbid obesity, preexisting pulmonary disorder, preexisting neurologic disorder, and location of ambulatory anesthesia (surgery or radiology). Composite PRAE was defined as the presence of any 1 of the following events: intraoperative bronchospasm, intraoperative laryngospasm, postoperative apnea, postoperative laryngospasm, postoperative bronchospasm, or postoperative prolonged oxygen requirement. The risk prediction tool was developed and validated using a split-sampling technique: the database was divided into 2 independent cohorts, based on the year in which the patient received ambulatory anesthesia for surgery and radiology, and logistic regression was applied. A risk score was developed based on the regression coefficients from the validation tool. The performance of the risk prediction tool was assessed by using tests of discrimination and calibration. The overall incidence of composite PRAE was 2.8%. The derivation cohort included 8904 patients, and the validation cohort included 10,155 patients. The risk of PRAE was 3.9% in the development cohort and 1.8% in the validation cohort. Age ≤ 3 years (versus >3 years), ASA physical status II or III (versus ASA physical status I), morbid obesity, preexisting pulmonary disorder, and surgery (versus radiology) significantly predicted the occurrence of PRAE in a multivariable logistic regression
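The discrimination test mentioned above is typically the c-statistic (area under the ROC curve). A sketch with hypothetical integer risk scores; the point values and outcomes below are illustrative, not the published tool:

```python
def c_statistic(scores, outcomes):
    """Discrimination: probability that a randomly chosen patient with
    the event outscores one without it (ties count 1/2)."""
    events = [s for s, o in zip(scores, outcomes) if o]
    nonevents = [s for s, o in zip(scores, outcomes) if not o]
    concordant = sum(1.0 if e > ne else 0.5 if e == ne else 0.0
                     for e in events for ne in nonevents)
    return concordant / (len(events) * len(nonevents))

# hypothetical integer risk scores (points summed over predictors such
# as age<=3y, ASA II/III, obesity, pulmonary disorder, surgery vs.
# radiology) and illustrative PRAE outcomes
scores = [5, 4, 4, 2, 1, 1, 0, 0]
outcomes = [1, 1, 0, 1, 0, 0, 0, 0]
c = c_statistic(scores, outcomes)
```

A c-statistic of 0.5 indicates no discrimination and 1.0 perfect ranking; reported prediction tools of this kind usually fall in between.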

  18. Predicting CH4 adsorption capacity of microporous carbon using N2 isotherm and a new analytical model

    USGS Publications Warehouse

    Sun, Jielun; Chen, S.; Rostam-Abadi, M.; Rood, M.J.

    1998-01-01

A new analytical pore size distribution (PSD) model was developed to predict the CH4 adsorption (storage) capacity of microporous adsorbent carbon. The model is based on a 3-D adsorption isotherm equation derived from statistical mechanical principles. Least-squares error minimization is used to solve for the PSD without any pre-assumed distribution function. In comparison with several well-accepted analytical methods from the literature, this 3-D model offers a relatively realistic PSD description for select reference materials, including activated carbon fibers. N2 and CH4 adsorption data were correlated using the 3-D model for the commercial carbons BPL and AX-21. Predicted CH4 adsorption isotherms, based on N2 adsorption at 77 K, were in reasonable agreement with the experimental CH4 isotherms. Modeling results indicate that not all pores contribute the same percentage Vm/Vs for CH4 storage, due to differing adsorbed CH4 densities. Pores near 8-9 Å show higher Vm/Vs on an equivalent-volume basis than do larger pores.
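The PSD-solving step can be illustrated as a linear inverse problem: the measured isotherm is modeled as a weighted sum of single-pore-width model isotherms, and the weights (the PSD) are found by least squares. The sketch below uses hypothetical Langmuir kernels in place of the paper's 3-D isotherm equation, and unconstrained least squares where a real implementation would enforce non-negative weights:

```python
import numpy as np

# Columns of the kernel are model isotherms for individual pore widths.
# Hypothetical Langmuir shapes stand in for the paper's 3-D isotherm
# equation; the PSD is the weight vector that best reproduces the
# "measured" isotherm, with no distribution shape assumed.
pressures = np.linspace(0.01, 1.0, 50)
affinities = [2.0, 10.0, 50.0]            # one per discrete pore width
kernel = np.stack([a * pressures / (1 + a * pressures) for a in affinities],
                  axis=1)

true_weights = np.array([0.2, 0.5, 0.3])  # synthetic sample's pore volumes
isotherm = kernel @ true_weights          # plays the role of the N2 data

# Unconstrained least squares for the sketch; a real implementation would
# enforce non-negative weights (e.g. non-negative least squares).
weights, *_ = np.linalg.lstsq(kernel, isotherm, rcond=None)
```

Once the weights are recovered from the N2 isotherm, the same kernel idea with CH4 single-pore isotherms yields the predicted CH4 uptake.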

  19. Investigation of the short argon arc with hot anode. II. Analytical model

    NASA Astrophysics Data System (ADS)

    Khrabry, A.; Kaganovich, I. D.; Nemchinsky, V.; Khodak, A.

    2018-01-01

A short atmospheric pressure argon arc is studied numerically and analytically. In a short arc with an inter-electrode gap of several millimeters, non-equilibrium effects in the plasma play an important role in the operation of the arc. The high anode temperature leads to electron emission and intensive radiation from its surface. A complete, self-consistent analytical model of the whole arc, comprising models for the near-electrode regions and the arc column and a model of heat transfer in the cylindrical electrodes, was developed. The model predicts the widths of the non-equilibrium layers and the arc column, the voltages and plasma profiles in these regions, and the heat and ion fluxes to the electrodes. Parametric studies of the arc have been performed for a range of arc current densities, inter-electrode gap widths, and gas pressures. The model was validated against experimental data and verified by comparison with a numerical solution. Good agreement between the analytical model and simulations and reasonable agreement with experimental data were obtained.

  20. Investigation of the short argon arc with hot anode. II. Analytical model

    DOE PAGES

    Khrabry, A.; Kaganovich, I. D.; Nemchinsky, V.; ...

    2018-01-22

A short atmospheric pressure argon arc is studied numerically and analytically. In a short arc with an inter-electrode gap of several millimeters, non-equilibrium effects in the plasma play an important role in the operation of the arc. The high anode temperature leads to electron emission and intensive radiation from its surface. A complete, self-consistent analytical model of the whole arc, comprising models for the near-electrode regions and the arc column and a model of heat transfer in the cylindrical electrodes, was developed. The model predicts the widths of the non-equilibrium layers and the arc column, the voltages and plasma profiles in these regions, and the heat and ion fluxes to the electrodes. Parametric studies of the arc have been performed for a range of arc current densities, inter-electrode gap widths, and gas pressures. The model was validated against experimental data and verified by comparison with a numerical solution. In conclusion, good agreement between the analytical model and simulations and reasonable agreement with experimental data were obtained.

  1. Investigation of the short argon arc with hot anode. II. Analytical model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khrabry, A.; Kaganovich, I. D.; Nemchinsky, V.

A short atmospheric pressure argon arc is studied numerically and analytically. In a short arc with an inter-electrode gap of several millimeters, non-equilibrium effects in the plasma play an important role in the operation of the arc. The high anode temperature leads to electron emission and intensive radiation from its surface. A complete, self-consistent analytical model of the whole arc, comprising models for the near-electrode regions and the arc column and a model of heat transfer in the cylindrical electrodes, was developed. The model predicts the widths of the non-equilibrium layers and the arc column, the voltages and plasma profiles in these regions, and the heat and ion fluxes to the electrodes. Parametric studies of the arc have been performed for a range of arc current densities, inter-electrode gap widths, and gas pressures. The model was validated against experimental data and verified by comparison with a numerical solution. In conclusion, good agreement between the analytical model and simulations and reasonable agreement with experimental data were obtained.

  2. Predicting survival of de novo metastatic breast cancer in Asian women: systematic review and validation study.

    PubMed

    Miao, Hui; Hartman, Mikael; Bhoo-Pathy, Nirmala; Lee, Soo-Chin; Taib, Nur Aishah; Tan, Ern-Yu; Chan, Patrick; Moons, Karel G M; Wong, Hoong-Seam; Goh, Jeremy; Rahim, Siti Mastura; Yip, Cheng-Har; Verkooijen, Helena M

    2014-01-01

In Asia, up to 25% of breast cancer patients present with distant metastases at diagnosis. Given the heterogeneous survival probabilities of de novo metastatic breast cancer, individual outcome prediction is challenging. The aim of this study was to identify existing prognostic models for patients with de novo metastatic breast cancer and validate them in Asia. We performed a systematic review to identify prediction models for metastatic breast cancer. Models were validated in 642 women with de novo metastatic breast cancer registered between 2000 and 2010 in the Singapore Malaysia Hospital Based Breast Cancer Registry. Survival curves for low-, intermediate-, and high-risk groups according to each prognostic score were compared by log-rank test, and discrimination of the models was assessed by the concordance statistic (C-statistic). We identified 16 prediction models, seven of which were for patients with brain metastases only. Performance status, estrogen receptor status, metastatic site(s), and disease-free interval were the most common predictors. We were able to validate nine prediction models. The capacity of the models to discriminate between poor and good survivors varied from poor to fair, with C-statistics ranging from 0.50 (95% CI, 0.48-0.53) to 0.63 (95% CI, 0.60-0.66). The discriminatory performance of existing prediction models for de novo metastatic breast cancer in Asia is modest. Development of an Asian-specific prediction model is needed to improve prognostication and guide decision making.

  3. Statistical validation of predictive TRANSP simulations of baseline discharges in preparation for extrapolation to JET D-T

    NASA Astrophysics Data System (ADS)

Kim, Hyun-Tae; Romanelli, M.; Yuan, X.; Kaye, S.; Sips, A. C. C.; Frassinetti, L.; Buchanan, J.; JET Contributors

    2017-06-01

This paper presents for the first time a statistical validation of predictive TRANSP simulations of plasma temperature using two transport models, GLF23 and TGLF, over a database of 80 baseline H-mode discharges in JET-ILW. While the accuracy of the predicted T_e with TRANSP-GLF23 is affected by plasma collisionality, the dependency of the predictions on collisionality is less significant when using TRANSP-TGLF, indicating that the latter model has broader applicability across plasma regimes. TRANSP-TGLF also shows good matching of the predicted T_i with experimental measurements, allowing for a more accurate prediction of the neutron yields. The impact of the input data and assumptions prescribed in the simulations is also investigated in this paper. The statistical validation and the assessment of the uncertainty level in predictive TRANSP simulations for JET-ILW-DD will constitute the basis for the extrapolation to JET-ILW-DT experiments.

  4. Validity and validation of expert (Q)SAR systems.

    PubMed

    Hulzebos, E; Sijm, D; Traas, T; Posthumus, R; Maslankiewicz, L

    2005-08-01

At a recent workshop in Setubal (Portugal), principles were drafted to assess the suitability of (quantitative) structure-activity relationships ((Q)SARs) for assessing the hazards and risks of chemicals. In the present study, we applied some of the Setubal principles to test the validity of three (Q)SAR expert systems and validate the results. These principles include a mechanistic basis, the availability of a training set, and validation. ECOSAR, BIOWIN, and DEREK for Windows have a mechanistic or empirical basis. ECOSAR has a training set for each QSAR. For half of the structural fragments, the number of chemicals in the training set is >4. Based on structural fragments and log Kow, ECOSAR uses linear regression to predict ecotoxicity. Validating ECOSAR for three 'valid' classes results in a predictivity of ≥64%. BIOWIN uses (non-)linear regressions to predict the probability of biodegradability based on fragments and molecular weight. It has a large training set and predicts non-ready biodegradability well. DEREK for Windows predictions are supported by a mechanistic rationale and literature references. The structural alerts in this program have been developed with a training set of positive and negative toxicity data. However, to support the prediction, only a limited number of chemicals in the training set is presented to the user. DEREK for Windows predicts effects by 'if-then' reasoning. The program predicts best for mutagenicity and carcinogenicity. Each structural fragment in ECOSAR and DEREK for Windows needs to be evaluated and validated separately.

  5. External model validation of binary clinical risk prediction models in cardiovascular and thoracic surgery.

    PubMed

    Hickey, Graeme L; Blackstone, Eugene H

    2016-08-01

    Clinical risk-prediction models serve an important role in healthcare. They are used for clinical decision-making and measuring the performance of healthcare providers. To establish confidence in a model, external model validation is imperative. When designing such an external model validation study, thought must be given to patient selection, risk factor and outcome definitions, missing data, and the transparent reporting of the analysis. In addition, there are a number of statistical methods available for external model validation. Execution of a rigorous external validation study rests in proper study design, application of suitable statistical methods, and transparent reporting. Copyright © 2016 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.
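Of the statistical methods available for external validation, calibration checks are among the simplest to sketch. The functions below are illustrative only: they compute the observed/expected event ratio overall ("calibration in the large") and within risk-ordered groups:

```python
def observed_expected(pred_risks, outcomes):
    """Overall observed/expected event ratio: O/E near 1 suggests the
    model is calibrated 'in the large' in the new population."""
    return sum(outcomes) / sum(pred_risks)

def oe_by_risk_group(pred_risks, outcomes, n_groups=2):
    """O/E within equal-sized groups ordered by predicted risk,
    a coarse, plot-free stand-in for a calibration curve."""
    order = sorted(range(len(pred_risks)), key=lambda i: pred_risks[i])
    size = len(order) // n_groups
    ratios = []
    for g in range(n_groups):
        idx = order[g * size:] if g == n_groups - 1 \
            else order[g * size:(g + 1) * size]
        ratios.append(sum(outcomes[i] for i in idx) /
                      sum(pred_risks[i] for i in idx))
    return ratios

# hypothetical external cohort: predicted risks and observed events
risks = [0.5, 0.5, 0.5, 0.5]
events = [1, 0, 1, 0]
```

In a full validation study these checks would be complemented by discrimination measures and transparent handling of missing data, as the abstract emphasizes.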

  6. A Unified Model of Performance: Validation of its Predictions across Different Sleep/Wake Schedules.

    PubMed

    Ramakrishnan, Sridhar; Wesensten, Nancy J; Balkin, Thomas J; Reifman, Jaques

    2016-01-01

    Historically, mathematical models of human neurobehavioral performance developed on data from one sleep study were limited to predicting performance in similar studies, restricting their practical utility. We recently developed a unified model of performance (UMP) to predict the effects of the continuum of sleep loss-from chronic sleep restriction (CSR) to total sleep deprivation (TSD) challenges-and validated it using data from two studies of one laboratory. Here, we significantly extended this effort by validating the UMP predictions across a wide range of sleep/wake schedules from different studies and laboratories. We developed the UMP on psychomotor vigilance task (PVT) lapse data from one study encompassing four different CSR conditions (7 d of 3, 5, 7, and 9 h of sleep/night), and predicted performance in five other studies (from four laboratories), including different combinations of TSD (40 to 88 h), CSR (2 to 6 h of sleep/night), control (8 to 10 h of sleep/night), and nap (nocturnal and diurnal) schedules. The UMP accurately predicted PVT performance trends across 14 different sleep/wake conditions, yielding average prediction errors between 7% and 36%, with the predictions lying within 2 standard errors of the measured data 87% of the time. In addition, the UMP accurately predicted performance impairment (average error of 15%) for schedules (TSD and naps) not used in model development. The unified model of performance can be used as a tool to help design sleep/wake schedules to optimize the extent and duration of neurobehavioral performance and to accelerate recovery after sleep loss. © 2016 Associated Professional Sleep Societies, LLC.

  7. A NONSTEADY-STATE ANALYTICAL MODEL TO PREDICT GASEOUS EMISSIONS OF VOLATILE ORGANIC COMPOUNDS FROM LANDFILLS. (R825689C072)

    EPA Science Inventory

    Abstract

    A general mathematical model is developed to predict emissions of volatile organic compounds (VOCs) from hazardous or sanitary landfills. The model is analytical in nature and includes important mechanisms occurring in unsaturated subsurface landfill environme...

  8. Clinical Nomograms to Predict Stone-Free Rates after Shock-Wave Lithotripsy: Development and Internal-Validation

    PubMed Central

    Kim, Jung Kwon; Ha, Seung Beom; Jeon, Chan Hoo; Oh, Jong Jin; Cho, Sung Yong; Oh, Seung-June; Kim, Hyeon Hoe; Jeong, Chang Wook

    2016-01-01

Purpose: Shock-wave lithotripsy (SWL) is accepted as the first-line treatment modality for uncomplicated upper urinary tract stones; however, validated prediction models with regard to stone-free rates (SFRs) are still needed. We aimed to develop nomograms predicting SFRs after the first and within the third session of SWL. Computed tomography (CT) information was also modeled for constructing nomograms. Materials and Methods: From March 2006 to December 2013, 3028 patients were treated with SWL for ureter and renal stones at our three tertiary institutions. Four cohorts were constructed: Total-development, Total-validation, CT-development, and CT-validation cohorts. The nomograms were developed using multivariate logistic regression models with significant variables selected in a univariate logistic regression model. A C-index was used to assess the discrimination accuracy of the nomograms, and calibration plots were used to analyze the consistency of prediction. Results: The SFR, after the first and within the third session, was 48.3% and 68.8%, respectively. Significant variables were sex, stone location, stone number, and maximal stone diameter in the Total-development cohort; mean Hounsfield unit (HU) and grade of hydronephrosis (HN) were additional parameters in the CT-development cohort. The C-indices were 0.712 and 0.723 for after the first and within the third session of SWL in the Total-development cohort, and 0.755 and 0.756 in the CT-development cohort, respectively. The calibration plots showed good correspondence. Conclusions: We constructed and validated nomograms to predict the SFR after SWL. To the best of our knowledge, these are the first graphical nomograms to be modeled with CT information. They may be useful for patient counseling and treatment decision-making. PMID:26890006

  9. Analytical Verifications in Cryogenic Testing of NGST Advanced Mirror System Demonstrators

    NASA Technical Reports Server (NTRS)

    Cummings, Ramona; Levine, Marie; VanBuren, Dave; Kegley, Jeff; Green, Joseph; Hadaway, James; Presson, Joan; Cline, Todd; Stahl, H. Philip (Technical Monitor)

    2002-01-01

Ground-based testing is a critical and costly part of component, assembly, and system verifications of large space telescopes. At such tests, however, with integral teamwork by planners, analysts, and test personnel, segments can be included to validate specific analytical parameters and algorithms at relatively low additional cost. This paper opens with the strategy of analytical verification segments added to vacuum cryogenic testing of Advanced Mirror System Demonstrator (AMSD) assemblies. These AMSD assemblies incorporate material and architecture concepts being considered in the Next Generation Space Telescope (NGST) design. The test segments for workmanship testing, cold survivability, and cold-operation optical throughput are supplemented by segments for analytical verifications of specific structural, thermal, and optical parameters. Drawing on integrated modeling and separate materials testing, the paper continues with the support plan for analyses, data, and observation requirements during the AMSD testing, currently slated for late calendar year 2002 to mid calendar year 2003. The paper includes anomaly resolution, as gleaned by the authors from similar analytical verification support of a previous large space telescope, then closes with a draft of plans for parameter extrapolations, to form a well-verified portion of the integrated modeling being done for NGST performance predictions.

  10. Characterization and validation of an in silico toxicology model to predict the mutagenic potential of drug impurities*

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valerio, Luis G., E-mail: luis.valerio@fda.hhs.gov; Cross, Kevin P.

Control and minimization of human exposure to potential genotoxic impurities found in drug substances and products is an important part of preclinical safety assessments of new drug products. The FDA's 2008 draft guidance on genotoxic and carcinogenic impurities in drug substances and products allows the use of computational quantitative structure-activity relationships (QSAR) to identify structural alerts for known and expected impurities present at levels below qualified thresholds. This study provides the information necessary to establish the practical use of a new in silico toxicology model for predicting Salmonella t. mutagenicity (Ames assay outcome) of drug impurities and other chemicals. We describe the model's chemical content and toxicity fingerprint in terms of compound space and molecular and structural toxicophores, and have rigorously tested its predictive power using both cross-validation and external validation experiments, as well as case studies. Consistent with the desired regulatory use, the model performs with high sensitivity (81%) and high negative predictivity (81%), based on external validation with 2368 compounds foreign to the model and having known mutagenicity. A database of drug impurities was created from proprietary FDA submissions and the public literature, which revealed significant overlap between the structural features of drug impurities and the training set chemicals in the QSAR model. Overall, the model's predictive performance was found to be acceptable for screening drug impurities for Salmonella mutagenicity. Highlights: We characterize a new in silico model to predict the mutagenicity of drug impurities; the model predicts Salmonella mutagenicity and will be useful for safety assessment; we examine the toxicity fingerprints and toxicophores of this Ames assay model; we compare these attributes to those found in drug impurities known to FDA/CDER; and we validate the model and find it has the desired predictive performance.

  11. A Longitudinal Study of the Predictive Validity of a Kindergarten Screening Battery.

    ERIC Educational Resources Information Center

    Kilgallon, Mary K.; Mueller, Richard J.

    Test validity was studied in nine subtests of a kindergarten screening battery used to predict reading comprehension for children up to five years after entering kindergarten. The independent variables were kindergarteners' scores on the: (1) Otis-Lennon Mental Ability Test; (2) Bender Visual Motor Gestalt Test; (3) Detroit Tests of Learning…

  12. A prospectively validated nomogram for predicting the risk of chemotherapy-induced febrile neutropenia: a multicenter study.

    PubMed

    Bozcuk, H; Yıldız, M; Artaç, M; Kocer, M; Kaya, Ç; Ulukal, E; Ay, S; Kılıç, M P; Şimşek, E H; Kılıçkaya, P; Uçar, S; Coskun, H S; Savas, B

    2015-06-01

There is a clinical need to predict the risk of febrile neutropenia before a specific cycle of chemotherapy in cancer patients. Data on 3882 chemotherapy cycles in 1089 consecutive patients with lung, breast, and colon cancer from four teaching hospitals were used to construct a predictive model for febrile neutropenia. A final nomogram derived from the multivariate predictive model was prospectively confirmed in a second cohort of 960 consecutive cases and 1444 cycles. The following factors were used to construct the nomogram: previous history of febrile neutropenia, pre-cycle lymphocyte count, type of cancer, cycle of current chemotherapy, and patient age. The predictive model had a concordance index of 0.95 (95% confidence interval (CI) 0.91-0.99) in the derivation cohort and 0.85 (95% CI 0.80-0.91) in the external validation cohort. A threshold of 15% for the risk of febrile neutropenia in the derivation cohort was associated with a sensitivity of 0.76 and a specificity of 0.98. These figures were 1.00 and 0.49 in the validation cohort when a risk threshold of 50% was chosen. This nomogram is helpful in the prediction of febrile neutropenia after chemotherapy in patients with lung, breast, and colon cancer. Use of this nomogram may help decrease the morbidity and mortality associated with febrile neutropenia and deserves further validation.

  13. A new test set for validating predictions of protein-ligand interaction.

    PubMed

    Nissink, J Willem M; Murray, Chris; Hartshorn, Mike; Verdonk, Marcel L; Cole, Jason C; Taylor, Robin

    2002-12-01

    We present a large test set of protein-ligand complexes for the purpose of validating algorithms that rely on the prediction of protein-ligand interactions. The set consists of 305 complexes with protonation states assigned by manual inspection. The following checks have been carried out to identify unsuitable entries in this set: (1) assessing the involvement of crystallographically related protein units in ligand binding; (2) identification of bad clashes between protein side chains and ligand; and (3) assessment of structural errors, and/or inconsistency of ligand placement with crystal structure electron density. In addition, the set has been pruned to assure diversity in terms of protein-ligand structures, and subsets are supplied for different protein-structure resolution ranges. A classification of the set by protein type is available. As an illustration, validation results are shown for GOLD and SuperStar. GOLD is a program that performs flexible protein-ligand docking, and SuperStar is used for the prediction of favorable interaction sites in proteins. The new CCDC/Astex test set is freely available to the scientific community (http://www.ccdc.cam.ac.uk). Copyright 2002 Wiley-Liss, Inc.

  14. Predicting risk behaviors: development and validation of a diagnostic scale.

    PubMed

    Witte, K; Cameron, K A; McKeon, J K; Berkowitz, J M

    1996-01-01

    The goal of this study was to develop and validate the Risk Behavior Diagnosis (RBD) Scale for use by health care providers and practitioners interested in promoting healthy behaviors. Theoretically guided by the Extended Parallel Process Model (EPPM; a fear appeal theory), the RBD scale was designed to work in conjunction with an easy-to-use formula to determine which types of health risk messages would be most appropriate for a given individual or audience. Because some health risk messages promote behavior change and others backfire, this type of scale offers guidance to practitioners on how to develop the best persuasive message possible to motivate healthy behaviors. The results of the study demonstrate the RBD scale to have a high degree of content, construct, and predictive validity. Specific examples and practical suggestions are offered to facilitate use of the scale for health practitioners.

  15. Analytic prediction of baryonic effects from the EFT of large scale structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewandowski, Matthew; Perko, Ashley; Senatore, Leonardo, E-mail: mattlew@stanford.edu, E-mail: perko@stanford.edu, E-mail: senatore@stanford.edu

    2015-05-01

    The large scale structures of the universe will likely be the next leading source of cosmological information. It is therefore crucial to understand their behavior. The Effective Field Theory of Large Scale Structures provides a consistent way to perturbatively predict the clustering of dark matter at large distances. The fact that baryons move distances comparable to dark matter allows us to infer that baryons at large distances can be described in a similar formalism: the backreaction of short-distance non-linearities and of star-formation physics at long distances can be encapsulated in an effective stress tensor, characterized by a few parameters. The functional form of baryonic effects can therefore be predicted. In the power spectrum the leading contribution goes as ∝ k² P(k), with P(k) being the linear power spectrum and with the numerical prefactor depending on the details of the star-formation physics. We also perform the resummation of the contribution of the long-wavelength displacements, allowing us to consistently predict the effect of the relative motion of baryons and dark matter. We compare our predictions with simulations that contain several implementations of baryonic physics, finding percent agreement up to relatively high wavenumbers such as k ≅ 0.3 h Mpc⁻¹ or k ≅ 0.6 h Mpc⁻¹, depending on the order of the calculation. Our results open a novel way to understand baryonic effects analytically, as well as to interface with simulations.
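
    The k²-scaling described in the abstract can be illustrated with a minimal numerical sketch. Everything here is a stand-in: the toy linear spectrum and the prefactor `alpha` are invented placeholders, not the paper's fitted values; only the ∝ k² P(k) shape comes from the abstract.

```python
# Minimal sketch (all values illustrative): the leading baryonic
# contribution to the power spectrum scales as alpha * k^2 * P_lin(k),
# with the prefactor alpha set by the details of star-formation physics.

def linear_power_toy(k):
    """Toy stand-in for the linear matter power spectrum P(k)."""
    return k / (1.0 + (k / 0.02) ** 2) ** 1.5  # arbitrary smooth shape

def baryonic_contribution(k, alpha):
    """Leading baryonic term: proportional to k^2 * P_lin(k)."""
    return alpha * k ** 2 * linear_power_toy(k)

ks = [0.05, 0.1, 0.2, 0.3]   # wavenumbers in h/Mpc
alpha = -1.5                 # illustrative prefactor, units of (Mpc/h)^2
fractional = [baryonic_contribution(k, alpha) / linear_power_toy(k) for k in ks]
# The fractional correction grows as k^2, so it matters most at high k.
```

Because the correction is a pure k² rescaling of the linear spectrum, the fractional effect at k = 0.1 is four times that at k = 0.05, regardless of the spectrum's shape.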

  16. Comparison of the predictive validity of diagnosis-based risk adjusters for clinical outcomes.

    PubMed

    Petersen, Laura A; Pietz, Kenneth; Woodard, LeChauncy D; Byrne, Margaret

    2005-01-01

    Many possible methods of risk adjustment exist, but there is a dearth of comparative data on their performance. We compared the predictive validity of 2 widely used methods (Diagnostic Cost Groups [DCGs] and Adjusted Clinical Groups [ACGs]) for 2 clinical outcomes using a large national sample of patients. We studied all patients who used Veterans Health Administration (VA) medical services in fiscal year (FY) 2001 (n = 3,069,168) and assigned both a DCG and an ACG to each. We used logistic regression analyses to compare predictive ability for death or long-term care (LTC) hospitalization for age/gender models, DCG models, and ACG models. We also assessed the effect of adding age to the DCG and ACG models. Patients in the highest DCG categories, indicating higher severity of illness, were more likely to die or to require LTC hospitalization. Surprisingly, the age/gender model predicted death slightly more accurately than the ACG model (c-statistic of 0.710 versus 0.700, respectively). The addition of age to the ACG model improved the c-statistic to 0.768. The highest c-statistic for prediction of death was obtained with a DCG/age model (0.830). The lowest c-statistics were obtained for age/gender models for LTC hospitalization (c-statistic 0.593). The c-statistic for use of ACGs to predict LTC hospitalization was 0.783, and improved to 0.792 with the addition of age. The c-statistics for use of DCGs and DCG/age to predict LTC hospitalization were 0.885 and 0.890, respectively, indicating the best prediction. We found that risk adjusters based upon diagnoses predicted an increased likelihood of death or LTC hospitalization, exhibiting good predictive validity. In this comparative analysis using VA data, DCG models were generally superior to ACG models in predicting clinical outcomes, although ACG model performance was enhanced by the addition of age.
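
    The c-statistic used throughout this record is the concordance probability, which can be computed directly from predicted risks and observed outcomes. A minimal sketch in pure Python, with invented toy data (the scores and labels below are illustrative, not drawn from the VA cohort):

```python
def c_statistic(scores, labels):
    """Concordance statistic (AUC): the probability that a randomly chosen
    positive case (e.g. death / LTC hospitalization) receives a higher
    predicted risk than a randomly chosen negative case; ties count half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    concordant = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos for n in neg
    )
    return concordant / (len(pos) * len(neg))

# Invented toy data: two risk models scored on the same outcomes.
labels  = [1, 1, 1, 0, 0, 0, 0, 1]
model_a = [0.9, 0.8, 0.25, 0.3, 0.2, 0.1, 0.35, 0.7]   # better discrimination
model_b = [0.6, 0.5, 0.2, 0.4, 0.3, 0.1, 0.55, 0.45]   # weaker discrimination
auc_a = c_statistic(model_a, labels)   # 0.875
auc_b = c_statistic(model_b, labels)   # 0.6875
```

A c-statistic of 0.5 means no discrimination and 1.0 perfect discrimination, which is why the study's DCG/age value of 0.830 for death indicates substantially better prediction than the age/gender value of 0.710.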

  17. Resolving Contradictions of Predictive Validity of University Matriculation Examinations in Nigeria: A Meta-Analysis Approach

    ERIC Educational Resources Information Center

    Modupe, Ale Veronica; Babafemi, Kolawole Emmanuel

    2015-01-01

    The study examined the various means of solving contradictions of predictive studies of University Matriculation Examination in Nigeria. The study used a sample size of 35 studies on predictive validity of University Matriculation Examination in Nigeria, which was purposively selected to have met the criteria for meta-analysis. Two null hypotheses…

  18. Development and Analytical Validation of an Immunoassay for Quantifying Serum Anti-Pertussis Toxin Antibodies Resulting from Bordetella pertussis Infection

    PubMed Central

    Menzies, Sandra L.; Kadwad, Vijay; Pawloski, Lucia C.; Lin, Tsai-Lien; Baughman, Andrew L.; Martin, Monte; Tondella, Maria Lucia C.; Meade, Bruce D.

    2009-01-01

    Adequately sensitive and specific methods to diagnose pertussis in adolescents and adults are not widely available. Currently, no Food and Drug Administration-approved diagnostic assays are available for the serodiagnosis of Bordetella pertussis. Since concentrations of B. pertussis-specific antibodies tend to be high during the later phases of disease, a simple, rapid, easily transferable serodiagnostic test was developed. This article describes test development, initial evaluation of a prototype kit enzyme-linked immunosorbent assay (ELISA) in an interlaboratory collaborative study, and analytical validation. The data presented here demonstrate that the kit met all prespecified criteria for precision, linearity, and accuracy for samples with anti-pertussis toxin (PT) immunoglobulin G (IgG) antibody concentrations in the range of 50 to 150 ELISA units (EU)/ml, the range believed to be most relevant for serodiagnosis. The assay met the precision and linearity criteria for a wider range, namely, from 50 to 200 EU/ml; however, the accuracy criterion was not met at 200 EU/ml. When the newly adopted World Health Organization International Standard for pertussis antiserum (human) reference reagent was used to evaluate accuracy, the accuracy criteria were met from 50 to 200 international units/ml. In conclusion, the IgG anti-PT ELISA met all assay validation parameters within the range considered most relevant for serodiagnosis. This ELISA was developed and analytically validated as a user-friendly kit that can be used in both qualitative and quantitative formats. The technology for producing the kit is transferable to public health laboratories. PMID:19864485
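
    The acceptance logic described, precision judged across replicates and accuracy judged against a nominal concentration, can be sketched generically. The 15% CV limit and 85-115% recovery window below are illustrative placeholders, not the study's prespecified criteria, and the replicate readings are invented:

```python
from statistics import mean, stdev

def passes_criteria(nominal, replicates, cv_limit=0.15,
                    recovery_range=(0.85, 1.15)):
    """Generic acceptance check: precision via the coefficient of variation
    of replicates, accuracy via mean recovery against the nominal value.
    Limits here are illustrative, not the study's prespecified criteria."""
    m = mean(replicates)
    cv = stdev(replicates) / m
    recovery = m / nominal
    return cv <= cv_limit and recovery_range[0] <= recovery <= recovery_range[1]

# Invented replicate anti-PT IgG readings (EU/ml) at two nominal levels:
ok_50 = passes_criteria(50, [48.0, 52.5, 49.5, 51.0])        # precise, accurate
ok_200 = passes_criteria(200, [162.0, 158.0, 165.0, 160.0])  # precise, biased low
```

The second level illustrates the pattern reported in the abstract: replicates can be tightly grouped (precision passes) while the mean recovery falls outside the accuracy window.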

  19. Validation and uncertainty analysis of a pre-treatment 2D dose prediction model

    NASA Astrophysics Data System (ADS)

    Baeza, Jose A.; Wolfs, Cecile J. A.; Nijsten, Sebastiaan M. J. J. G.; Verhaegen, Frank

    2018-02-01

    Independent verification of complex treatment delivery with megavolt photon beam radiotherapy (RT) has been effectively used to detect and prevent errors. This work presents the validation and uncertainty analysis of a model that predicts 2D portal dose images (PDIs) without a patient or phantom in the beam. The prediction model is based on an exponential point dose model with separable primary and secondary photon fluence components. The model includes a scatter kernel, off-axis ratio map, transmission values and penumbra kernels for beam-delimiting components. These parameters were derived through a model fitting procedure supplied with point dose and dose profile measurements of radiation fields. The model was validated against a treatment planning system (TPS; Eclipse) and radiochromic film measurements for complex clinical scenarios, including volumetric modulated arc therapy (VMAT). Confidence limits on fitted model parameters were calculated based on simulated measurements. A sensitivity analysis was performed to evaluate the effect of the parameter uncertainties on the model output. For the maximum uncertainty, the maximum deviating measurement sets were propagated through the fitting procedure and the model. The overall uncertainty was assessed using all simulated measurements. The validation of the prediction model against the TPS and the film showed a good agreement, with on average 90.8% and 90.5% of pixels passing a (2%,2 mm) global gamma analysis respectively, with a low dose threshold of 10%. The maximum and overall uncertainty of the model is dependent on the type of clinical plan used as input. The results can be used to study the robustness of the model. A model for predicting accurate 2D pre-treatment PDIs in complex RT scenarios can be used clinically and its uncertainties can be taken into account.

  20. Analytical study of the heat loss attenuation by clothing on thermal manikins under radiative heat loads.

    PubMed

    Den Hartog, Emiel A; Havenith, George

    2010-01-01

    For wearers of protective clothing in radiation environments there are no quantitative guidelines available for the effect of a radiative heat load on heat exchange. Under the European Union funded project ThermProtect, an analytical effort was defined to address the issue of radiative heat load while wearing protective clothing. As much information has become available within the ThermProtect project from thermal manikin experiments in thermal radiation environments, these sets of experimental data are used to verify the analytical approach. The analytical approach provided a good prediction of the heat loss in the manikin experiments; 95% of the variance was explained by the model. The model has not yet been validated at high radiative heat loads and neglects some physical properties of the radiation emissivity. Still, the analytical approach is pragmatic and may be useful for practical implementation in protective clothing standards for moderate thermal radiation environments.

  1. Analytical challenges for conducting rapid metabolism characterization for QIVIVE.

    PubMed

    Tolonen, Ari; Pelkonen, Olavi

    2015-06-05

    For quantitative in vitro-in vivo extrapolation (QIVIVE) of metabolism for the purposes of toxicokinetics prediction, a precise and robust analytical technique for identifying and measuring a chemical and its metabolites is an absolute prerequisite. Currently, high-resolution mass spectrometry (HR-MS) is a tool of choice for a majority of organic relatively lipophilic molecules, linked with a LC separation tool and simultaneous UV-detection. However, additional techniques such as gas chromatography, radiometric measurements and NMR, are required to cover the whole spectrum of chemical structures. To accumulate enough reliable and robust data for the validation of QIVIVE, there are some partially opposing needs: Detailed delineation of the in vitro test system to produce a reliable toxicokinetic measure for a studied chemical, and a throughput capacity of the in vitro set-up and the analytical tool as high as possible. We discuss current analytical challenges for the identification and quantification of chemicals and their metabolites, both stable and reactive, focusing especially on LC-MS techniques, but simultaneously attempting to pinpoint factors associated with sample preparation, testing conditions and strengths and weaknesses of a particular technique available for a particular task. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  2. Analytical Studies of Boundary Layer Generated Aircraft Interior Noise

    NASA Technical Reports Server (NTRS)

    Howe, M. S.; Shah, P. L.

    1997-01-01

    An analysis is made of the "interior noise" produced by high, subsonic turbulent flow over a thin elastic plate partitioned into "panels" by straight edges transverse to the mean flow direction. This configuration models a section of an aircraft fuselage that may be regarded as locally flat. The analytical problem can be solved in closed form to represent the acoustic radiation in terms of prescribed turbulent boundary layer pressure fluctuations. Two cases are considered: (i) the production of sound at an isolated panel edge (i.e., in the approximation in which the correlation between sound and vibrations generated at neighboring edges is neglected), and (ii) the sound generated by a periodic arrangement of identical panels. The latter problem is amenable to exact analytical treatment provided the panel edge conditions are the same for all panels. Detailed predictions of the interior noise depend on a knowledge of the turbulent boundary layer wall pressure spectrum, and are given here in terms of an empirical spectrum proposed by Laganelli and Wolfe. It is expected that these analytical representations of the sound generated by simplified models of fluid-structure interactions can be used to validate more general numerical schemes.

  3. Validation and Use of a Predictive Modeling Tool: Employing Scientific Findings to Improve Responsible Conduct of Research Education.

    PubMed

    Mulhearn, Tyler J; Watts, Logan L; Todd, E Michelle; Medeiros, Kelsey E; Connelly, Shane; Mumford, Michael D

    2017-01-01

    Although recent evidence suggests ethics education can be effective, the nature of specific training programs, and their effectiveness, varies considerably. Building on a recent path modeling effort, the present study developed and validated a predictive modeling tool for responsible conduct of research education. The predictive modeling tool allows users to enter ratings in relation to a given ethics training program and receive instantaneous evaluative information for course refinement. Validation work suggests the tool's predicted outcomes correlate strongly (r = 0.46) with objective course outcomes. Implications for training program development and refinement are discussed.

  4. Development and validation of a liquid chromatography-tandem mass spectrometry analytical method for the therapeutic drug monitoring of eight novel anticancer drugs.

    PubMed

    Herbrink, M; de Vries, N; Rosing, H; Huitema, A D R; Nuijen, B; Schellens, J H M; Beijnen, J H

    2018-04-01

    To support therapeutic drug monitoring of patients with cancer, a fast and accurate method for simultaneous quantification of the registered anticancer drugs afatinib, axitinib, ceritinib, crizotinib, dabrafenib, enzalutamide, regorafenib and trametinib in human plasma using liquid chromatography tandem mass spectrometry was developed and validated. Human plasma samples were collected from treated patients and stored at -20°C. Analytes and internal standards (stable isotopically labeled analytes) were extracted with acetonitrile. An equal amount of 10 mM NH4CO3 was added to the supernatant to yield the final extract. A 2 μL aliquot of this extract was injected onto a C18 column, gradient elution was applied and triple-quadrupole mass spectrometry in positive-ion mode was used for detection. All results were within the acceptance criteria of the latest US Food and Drug Administration guidance and European Medicines Agency guidelines on method validation, except for the carry-over of ceritinib and crizotinib. These were corrected for by the injection order of samples. Additional stability tests were carried out for axitinib and dabrafenib in relation to their reported photostability. In conclusion, the described method to simultaneously quantify the eight selected anticancer drugs in human plasma was successfully validated and applied for therapeutic drug monitoring in cancer patients treated with these drugs. Copyright © 2017 John Wiley & Sons, Ltd.

  5. Prediction of turning stability using receptance coupling

    NASA Astrophysics Data System (ADS)

    Jasiewicz, Marcin; Powałka, Bartosz

    2018-01-01

    This paper addresses machining stability prediction for the dynamic "lathe - workpiece" system, evaluated using the receptance coupling method. Dynamic properties of the lathe components (the spindle and the tailstock) are assumed to be constant and can be determined experimentally from the results of an impact test. Hence, the variable element of the "machine tool - holder - workpiece" system is the machined part, which can be easily modelled analytically. The receptance coupling method enables a synthesis of the experimental (spindle, tailstock) and analytical (machined part) models, so impact testing of the entire system becomes unnecessary. The paper presents the methodology of synthesizing the analytical and experimental models, the evaluation of the stability lobes, and an experimental validation procedure involving both the determination of the dynamic properties of the system and cutting tests. The experimental verification results are presented and discussed in the summary.
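
    The core idea of receptance coupling, joining an experimentally measured receptance with an analytically modelled one, can be sketched for the simplest case: rigid coupling at a single coincident degree of freedom, where the dynamic stiffnesses (reciprocals of receptance) add. The single-DOF parameters below are invented for illustration, not taken from the paper:

```python
# Hedged sketch: rigid point coupling of two receptances. H_a could come
# from an impact test (spindle/tailstock side), H_b from an analytical
# model of the machined part; all numerical values are illustrative.

def sdof_receptance(omega, k, m, c):
    """Receptance X/F of a single-DOF oscillator (complex-valued)."""
    return 1.0 / complex(k - m * omega ** 2, c * omega)

def couple(h_a, h_b):
    """Rigid coupling at one coincident DOF: dynamic stiffnesses add."""
    return 1.0 / (1.0 / h_a + 1.0 / h_b)

omega = 200.0  # excitation frequency, rad/s
h_machine = sdof_receptance(omega, k=2.0e7, m=50.0, c=4.0e3)  # "measured"
h_part = sdof_receptance(omega, k=5.0e6, m=2.0, c=100.0)      # modelled
h_total = couple(h_machine, h_part)
```

Evaluating the coupled receptance over a frequency grid, instead of a single omega, gives the frequency response function from which stability lobes are then computed.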

  6. Experimentally valid predictions of muscle force and EMG in models of motor-unit function are most sensitive to neural properties.

    PubMed

    Keenan, Kevin G; Valero-Cuevas, Francisco J

    2007-09-01

    Computational models of motor-unit populations are the objective implementations of the hypothesized mechanisms by which neural and muscle properties give rise to electromyograms (EMGs) and force. However, the variability/uncertainty of the parameters used in these models--and how they affect predictions--confounds assessing these hypothesized mechanisms. We perform a large-scale computational sensitivity analysis on the state-of-the-art computational model of surface EMG, force, and force variability by combining a comprehensive review of published experimental data with Monte Carlo simulations. To exhaustively explore model performance and robustness, we ran numerous iterative simulations each using a random set of values for nine commonly measured motor neuron and muscle parameters. Parameter values were sampled across their reported experimental ranges. Convergence after 439 simulations found that only 3 simulations met our two fitness criteria: approximating the well-established experimental relations for the scaling of EMG amplitude and force variability with mean force. An additional 424 simulations preferentially sampling the neighborhood of those 3 valid simulations converged to reveal 65 additional sets of parameter values for which the model predictions approximate the experimentally known relations. We find the model is not sensitive to muscle properties but very sensitive to several motor neuron properties--especially peak discharge rates and recruitment ranges. Therefore to advance our understanding of EMG and muscle force, it is critical to evaluate the hypothesized neural mechanisms as implemented in today's state-of-the-art models of motor unit function. We discuss experimental and analytical avenues to do so as well as new features that may be added in future implementations of motor-unit models to improve their experimental validity.
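
    The Monte Carlo screening strategy described, sampling parameters across experimental ranges and retaining only the sets whose predictions satisfy fitness criteria, can be sketched generically. The parameter names, ranges, stand-in model, and fitness window below are all illustrative placeholders, not the actual motor-unit model or its criteria:

```python
import random

random.seed(1)  # deterministic sampling for the sketch

RANGES = {                          # illustrative experimental ranges
    "peak_discharge_rate_hz": (20.0, 50.0),
    "recruitment_range_pct": (40.0, 90.0),
    "specific_tension": (10.0, 30.0),
}

def draw():
    """Sample one parameter set uniformly across the reported ranges."""
    return {name: random.uniform(*rng) for name, rng in RANGES.items()}

def model_output(p):
    # Stand-in model: some scalar prediction (e.g. an EMG-force slope).
    return p["peak_discharge_rate_hz"] / p["recruitment_range_pct"]

def fit(prediction):
    # Stand-in fitness criterion (e.g. matching an experimental relation).
    return 0.4 <= prediction <= 0.6

valid = [p for p in (draw() for _ in range(500)) if fit(model_output(p))]
# Only a minority of random parameter sets survive the screen, mirroring
# the study's finding that few simulations met the fitness criteria.
```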

  7. Predictive and Incremental Validity of Global and Domain-Based Adolescent Life Satisfaction Reports

    ERIC Educational Resources Information Center

    Haranin, Emily C.; Huebner, E. Scott; Suldo, Shannon M.

    2007-01-01

    Concurrent, predictive, and incremental validity of global and domain-based adolescent life satisfaction reports are examined with respect to internalizing and externalizing behavior problems. The Students' Life Satisfaction Scale (SLSS), Multidimensional Students' Life Satisfaction Scale (MSLSS), and measures of internalizing and externalizing…

  8. Measurement of predictive validity in violence risk assessment studies: a second-order systematic review.

    PubMed

    Singh, Jay P; Desmarais, Sarah L; Van Dorn, Richard A

    2013-01-01

    The objective of the present review was to examine how predictive validity is analyzed and reported in studies of instruments used to assess violence risk. We reviewed 47 predictive validity studies published between 1990 and 2011 of 25 instruments that were included in two recent systematic reviews. Although all studies reported receiver operating characteristic curve analyses and the area under the curve (AUC) performance indicator, this methodology was defined inconsistently and findings often were misinterpreted. In addition, there was between-study variation in benchmarks used to determine whether AUCs were small, moderate, or large in magnitude. Though virtually all of the included instruments were designed to produce categorical estimates of risk - through the use of either actuarial risk bins or structured professional judgments - only a minority of studies calculated performance indicators for these categorical estimates. In addition to AUCs, other performance indicators, such as correlation coefficients, were reported in 60% of studies, but were infrequently defined or interpreted. An investigation of sources of heterogeneity did not reveal significant variation in reporting practices as a function of risk assessment approach (actuarial vs. structured professional judgment), study authorship, geographic location, type of journal (general vs. specialized audience), sample size, or year of publication. Findings suggest a need for standardization of predictive validity reporting to improve comparison across studies and instruments. Copyright © 2013 John Wiley & Sons, Ltd.

  9. Study on Predicting Axial Load Capacity of CFST Columns

    NASA Astrophysics Data System (ADS)

    Ravi Kumar, H.; Muthu, K. U.; Kumar, N. S.

    2017-11-01

    This work presents an analytical and experimental study of the behaviour and ultimate load carrying capacity of axially compressed self-compacting concrete-filled steel tubular columns. Results of tests conducted by various researchers on 213 samples of concrete-filled steel tubular columns are reported, together with the present authors' experimental data. Two theoretical equations were derived for the prediction of the ultimate axial load strength of concrete-filled steel tubular columns. The predicted results were compared with, and validated against, the experimental data.

  10. (Very) Early technology assessment and translation of predictive biomarkers in breast cancer.

    PubMed

    Miquel-Cases, Anna; Schouten, Philip C; Steuten, Lotte M G; Retèl, Valesca P; Linn, Sabine C; van Harten, Wim H

    2017-01-01

    Predictive biomarkers can guide treatment decisions in breast cancer. Many studies are undertaken to discover and translate these biomarkers, yet few biomarkers make it to practice. Before use in clinical decision making, predictive biomarkers need to demonstrate analytical validity, clinical validity and clinical utility. While attaining analytical and clinical validity is relatively straightforward, by following methodological recommendations, the achievement of clinical utility is extremely challenging. It requires demonstrating three associations: the biomarker with the outcome (prognostic association), the effect of treatment independent of the biomarker, and the differential treatment effect between the prognostic and the predictive biomarker (predictive association). In addition, economical, ethical, regulatory, organizational and patient/doctor-related aspects are hampering the translational process. Traditionally, these aspects do not receive much attention until formal approval or reimbursement of a biomarker test (informed by Health Technology Assessment (HTA)) is at stake, at which point the clinical utility and sometimes price of the test can hardly be influenced anymore. When HTA analyses are performed earlier, during biomarker research and development, they may prevent further development of those biomarkers unlikely to ever provide sufficient added value to society, and rather facilitate translation of the promising ones. Early HTA is particularly relevant for the predictive biomarker field, as expensive medicines are under pressure and the need for biomarkers to guide their appropriate use is huge. Closer interaction between clinical researchers and HTA experts throughout the translational research process will ensure that available data and methodologies will be used most efficiently to facilitate biomarker translation. Copyright © 2016 The Author(s). Published by Elsevier Ltd.. All rights reserved.

  11. Analytical Methods of Decoupling the Automotive Engine Torque Roll Axis

    NASA Astrophysics Data System (ADS)

    JEONG, TAESEOK; SINGH, RAJENDRA

    2000-06-01

    This paper analytically examines the multi-dimensional mounting schemes of an automotive engine-gearbox system when excited by oscillating torques. In particular, the issue of torque roll axis decoupling is analyzed in significant detail since it is poorly understood. New dynamic decoupling axioms are presented and compared with the conventional elastic axis mounting and focalization methods. A linear time-invariant system assumption is made in addition to a proportionally damped system. Only rigid-body modes of the powertrain are considered and the chassis elements are assumed to be rigid. Several simplified physical systems are considered and new closed-form solutions for symmetric and asymmetric engine-mounting systems are developed. These clearly explain the design concepts for the 4-point mounting scheme. Our analytical solutions match with the existing design formulations that are only applicable to symmetric geometries. Spectra for all six rigid-body motions are predicted using the alternate decoupling methods and the closed-form solutions are verified. Also, our method is validated by comparing modal solutions with prior experimental and analytical studies. Parametric design studies are carried out to illustrate the methodology. Chief contributions of this research include the development of new or refined analytical models and closed-form solutions along with improved design strategies for the torque roll axis decoupling.

  12. Predicting Survival of De Novo Metastatic Breast Cancer in Asian Women: Systematic Review and Validation Study

    PubMed Central

    Miao, Hui; Hartman, Mikael; Bhoo-Pathy, Nirmala; Lee, Soo-Chin; Taib, Nur Aishah; Tan, Ern-Yu; Chan, Patrick; Moons, Karel G. M.; Wong, Hoong-Seam; Goh, Jeremy; Rahim, Siti Mastura; Yip, Cheng-Har; Verkooijen, Helena M.

    2014-01-01

    Background In Asia, up to 25% of breast cancer patients present with distant metastases at diagnosis. Given the heterogeneous survival probabilities of de novo metastatic breast cancer, individual outcome prediction is challenging. The aim of the study is to identify existing prognostic models for patients with de novo metastatic breast cancer and validate them in Asia. Materials and Methods We performed a systematic review to identify prediction models for metastatic breast cancer. Models were validated in 642 women with de novo metastatic breast cancer registered between 2000 and 2010 in the Singapore Malaysia Hospital Based Breast Cancer Registry. Survival curves for low, intermediate and high-risk groups according to each prognostic score were compared by log-rank test and discrimination of the models was assessed by concordance statistic (C-statistic). Results We identified 16 prediction models, seven of which were for patients with brain metastases only. Performance status, estrogen receptor status, metastatic site(s) and disease-free interval were the most common predictors. We were able to validate nine prediction models. The capacity of the models to discriminate between poor and good survivors varied from poor to fair with C-statistics ranging from 0.50 (95% CI, 0.48–0.53) to 0.63 (95% CI, 0.60–0.66). Conclusion The discriminatory performance of existing prediction models for de novo metastatic breast cancer in Asia is modest. Development of an Asian-specific prediction model is needed to improve prognostication and guide decision making. PMID:24695692

  13. Validation of the DECAF score to predict hospital mortality in acute exacerbations of COPD

    PubMed Central

    Echevarria, C; Steer, J; Heslop-Marshall, K; Stenton, SC; Hickey, PM; Hughes, R; Wijesinghe, M; Harrison, RN; Steen, N; Simpson, AJ; Gibson, GJ; Bourke, SC

    2016-01-01

    Background Hospitalisation due to acute exacerbations of COPD (AECOPD) is common, and subsequent mortality high. The DECAF score was derived for accurate prediction of mortality and risk stratification to inform patient care. We aimed to validate the DECAF score, internally and externally, and to compare its performance to other predictive tools. Methods The study took place in the two hospitals within the derivation study (internal validation) and in four additional hospitals (external validation) between January 2012 and May 2014. Consecutive admissions were identified by screening admissions and searching coding records. Admission clinical data, including DECAF indices, and mortality were recorded. The prognostic value of DECAF and other scores were assessed by the area under the receiver operator characteristic (AUROC) curve. Results In the internal and external validation cohorts, 880 and 845 patients were recruited. Mean age was 73.1 (SD 10.3) years, 54.3% were female, and mean (SD) FEV1 45.5 (18.3) per cent predicted. Overall mortality was 7.7%. The DECAF AUROC curve for inhospital mortality was 0.83 (95% CI 0.78 to 0.87) in the internal cohort and 0.82 (95% CI 0.77 to 0.87) in the external cohort, and was superior to other prognostic scores for inhospital or 30-day mortality. Conclusions DECAF is a robust predictor of mortality, using indices routinely available on admission. Its generalisability is supported by consistent strong performance; it can identify low-risk patients (DECAF 0–1) potentially suitable for Hospital at Home or early supported discharge services, and high-risk patients (DECAF 3–6) for escalation planning or appropriate early palliation. Trial registration number UKCRN ID 14214. PMID:26769015
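
    The risk stratification quoted in the conclusions can be expressed directly. Only the 0-6 score range and the 0-1 (low) and 3-6 (high) cut-offs come from the abstract; labelling a score of 2 as intermediate is an inference, and the component weights of the DECAF indices are not reproduced here:

```python
def decaf_risk_band(score):
    """Map a DECAF score (0-6) to the risk bands named in the abstract."""
    if not 0 <= score <= 6:
        raise ValueError("DECAF score must be between 0 and 6")
    if score <= 1:
        return "low"    # potentially suitable for Hospital at Home /
                        # early supported discharge
    if score >= 3:
        return "high"   # escalation planning or appropriate early palliation
    return "intermediate"  # score of 2: band inferred, not stated in abstract

bands = [decaf_risk_band(s) for s in range(7)]
```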

  14. Output-Adaptive Tetrahedral Cut-Cell Validation for Sonic Boom Prediction

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Darmofal, David L.

    2008-01-01

    A cut-cell approach to Computational Fluid Dynamics (CFD) that utilizes the median dual of a tetrahedral background grid is described. The discrete adjoint is also calculated, which permits adaptation based on improving the calculation of a specified output (off-body pressure signature) in supersonic inviscid flow. These predicted signatures are compared to wind tunnel measurements on and off the configuration centerline 10 body lengths below the model to validate the method for sonic boom prediction. Accurate mid-field sonic boom pressure signatures are calculated with the Euler equations without the use of hybrid grid or signature propagation methods. Highly-refined, shock-aligned anisotropic grids were produced by this method from coarse isotropic grids created without prior knowledge of shock locations. A heuristic reconstruction limiter provided stable flow and adjoint solution schemes while producing similar signatures to Barth-Jespersen and Venkatakrishnan limiters. The use of cut-cells with an output-based adaptive scheme completely automated this accurate prediction capability after a triangular mesh is generated for the cut surface. This automation drastically reduces the manual intervention required by existing methods.

  15. Lightweight ZERODUR: Validation of Mirror Performance and Mirror Modeling Predictions

    NASA Technical Reports Server (NTRS)

    Hull, Tony; Stahl, H. Philip; Westerhoff, Thomas; Valente, Martin; Brooks, Thomas; Eng, Ron

    2017-01-01

    Upcoming spaceborne missions, both moderate and large in scale, require extreme dimensional stability while relying both upon established lightweight mirror materials, and also upon accurate modeling methods to predict performance under varying boundary conditions. We describe tests, recently performed at NASA's XRCF chambers and laboratories in Huntsville Alabama, during which a 1.2 m diameter, f/1.29, 88% lightweighted SCHOTT ZERODUR(TradeMark) mirror was tested for thermal stability under static loads in steps down to 230K. Test results are compared to model predictions, based upon recently published data on ZERODUR(TradeMark). In addition to monitoring the mirror surface for thermal perturbations in XRCF Thermal Vacuum tests, static load gravity deformations have been measured and compared to model predictions. Also the Modal Response (dynamic disturbance) was measured and compared to model. We will discuss the fabrication approach and optomechanical design of the ZERODUR(TradeMark) mirror substrate by SCHOTT, its optical preparation for test by Arizona Optical Systems (AOS), and summarize the outcome of NASA's XRCF tests and model validations.

  16. Lightweight ZERODUR®: Validation of mirror performance and mirror modeling predictions

    NASA Astrophysics Data System (ADS)

    Hull, Anthony B.; Stahl, H. Philip; Westerhoff, Thomas; Valente, Martin; Brooks, Thomas; Eng, Ron

    2017-01-01

    Upcoming spaceborne missions, both moderate and large in scale, require extreme dimensional stability while relying both upon established lightweight mirror materials and upon accurate modeling methods to predict performance under varying boundary conditions. We describe tests, recently performed at NASA’s XRCF chambers and laboratories in Huntsville, Alabama, during which a 1.2 m diameter, f/1.29, 88% lightweighted SCHOTT ZERODUR® mirror was tested for thermal stability under static loads in steps down to 230 K. Test results are compared to model predictions, based upon recently published data on ZERODUR®. In addition to monitoring the mirror surface for thermal perturbations in XRCF thermal vacuum tests, static load gravity deformations have been measured and compared to model predictions. The modal response (dynamic disturbance) was also measured and compared to the model. We discuss the fabrication approach and optomechanical design of the ZERODUR® mirror substrate by SCHOTT, its optical preparation for test by Arizona Optical Systems (AOS), and summarize the outcome of NASA’s XRCF tests and model validations.

  17. Evaluating the predictive accuracy and the clinical benefit of a nomogram aimed to predict survival in node-positive prostate cancer patients: External validation on a multi-institutional database.

    PubMed

    Bianchi, Lorenzo; Schiavina, Riccardo; Borghesi, Marco; Bianchi, Federico Mineo; Briganti, Alberto; Carini, Marco; Terrone, Carlo; Mottrie, Alex; Gacci, Mauro; Gontero, Paolo; Imbimbo, Ciro; Marchioro, Giansilvio; Milanese, Giulio; Mirone, Vincenzo; Montorsi, Francesco; Morgia, Giuseppe; Novara, Giacomo; Porreca, Angelo; Volpe, Alessandro; Brunocilla, Eugenio

    2018-04-06

    To assess the predictive accuracy and the clinical value of a recent nomogram predicting cancer-specific mortality-free survival after surgery in pN1 prostate cancer patients through an external validation. We evaluated 518 prostate cancer patients treated with radical prostatectomy and pelvic lymph node dissection with evidence of nodal metastases at final pathology, at 10 tertiary centers. External validation was carried out using regression coefficients of the previously published nomogram. The performance characteristics of the model were assessed by quantifying predictive accuracy, according to the area under the curve in the receiver operating characteristic curve, and model calibration. Furthermore, we systematically analyzed the specificity, sensitivity, positive predictive value and negative predictive value for each nomogram-derived probability cut-off. Finally, we implemented decision curve analysis, in order to quantify the nomogram's clinical value in routine practice. External validation showed inferior predictive accuracy compared with the internal validation (65.8% vs 83.3%, respectively). The discrimination (area under the curve) of the multivariable model was 66.7% (95% CI 60.1-73.0%) by testing with receiver operating characteristic curve analysis. The calibration plot showed an overestimation throughout the range of predicted cancer-specific mortality-free survival probabilities. However, in decision curve analysis, the nomogram's use showed a net benefit when compared with the scenarios of treating all patients or none. In an external setting, the nomogram showed inferior predictive accuracy and suboptimal calibration characteristics as compared to those reported in the original population. However, decision curve analysis showed a clinical net benefit, suggesting clinical utility in the management of pN1 prostate cancer patients after surgery. © 2018 The Japanese Urological Association.

  18. Validated Questionnaire of Maternal Attitude and Knowledge for Predicting Caries Risk in Children: Epidemiological Study in North Jakarta, Indonesia.

    PubMed

    Laksmiastuti, Sri Ratna; Budiardjo, Sarworini Bagio; Sutadi, Heriandi

    2017-06-01

    Predicting caries risk in children can be done by identifying caries risk factors. It is an important measure that contributes to a better understanding of the cariogenic profile of the patient. Identification can be done by clinical examination and by answering a questionnaire. We arranged this study to verify the validity of a questionnaire for predicting caries risk in children. The study was conducted on 62 pairs of mothers and their children, aged between 3 and 5 years. The questionnaire consists of 10 questions concerning mothers' attitude and knowledge about oral health. The reliability and validity tests are based on Cronbach's alpha and the correlation coefficient value. All questions are reliable (Cronbach's alpha = 0.873) and valid (corrected item-total correlation >0.4). Five questions on mothers' attitude about oral health and five questions on mothers' knowledge about oral health are reliable and valid for predicting caries risk in children.
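
The reliability figure above (Cronbach's alpha = 0.873) has a direct computational definition. The following stdlib-Python sketch shows the standard formula; the function name and the population-variance convention are our choices, not the study's.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for k questionnaire items scored on n respondents.

    items: list of k lists, one per item, each of length n.
    Uses population variances; a sample-variance convention shifts the
    result slightly for small n.
    """
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # total score per respondent across all items
    totals = [sum(col[i] for col in items) for i in range(n)]
    item_var_sum = sum(var(col) for col in items)
    return k / (k - 1) * (1 - item_var_sum / var(totals))
```

With the study's 10 items, the reported alpha of 0.873 clears the common rule-of-thumb threshold of 0.7 for acceptable internal consistency.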

  19. Derivation, Validation and Application of a Pragmatic Risk Prediction Index for Benchmarking of Surgical Outcomes.

    PubMed

    Spence, Richard T; Chang, David C; Kaafarani, Haytham M A; Panieri, Eugenio; Anderson, Geoffrey A; Hutter, Matthew M

    2018-02-01

    Despite the existence of multiple validated risk assessment and quality benchmarking tools in surgery, their utility outside of high-income countries is limited. We sought to derive, validate and apply a scoring system that is both (1) feasible, and (2) reliably predicts mortality in a middle-income country (MIC) context. A 5-step methodology was used: (1) development of a de novo surgical outcomes database modeled around the American College of Surgeons' National Surgical Quality Improvement Program (ACS-NSQIP) in South Africa (SA dataset), (2) use of the resultant data to identify all predictors of in-hospital death with more than 90% capture indicating feasibility of collection, (3) use of these predictors to derive and validate an integer-based score that reliably predicts in-hospital death in the 2012 ACS-NSQIP, (4) application of the score in the original SA dataset to demonstrate its performance, (5) identification of threshold cutoffs of the score to prompt action and drive quality improvement. Following steps one through three above, the 13-point Codman's score was derived and validated on 211,737 and 109,079 patients, respectively, and includes: age ≥65 years (1), partially or completely dependent functional status (1), preoperative transfusions ≥4 units (1), emergency operation (2), sepsis or septic shock (2), American Society of Anesthesiologists score ≥3 (3) and operative procedure (1-3). Application of the score to 373 patients in the SA dataset showed good discrimination and calibration to predict an in-hospital death. A Codman Score of 8 is an optimal cutoff point for defining expected and unexpected deaths. We have designed a novel risk prediction score specific for a MIC context. The Codman Score can prove useful for both (1) preoperative decision-making and (2) benchmarking the quality of surgical care in MICs.
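
An integer-weighted score like this maps directly to code. The sketch below uses the point values given in the abstract; the argument names are illustrative, and the 1-3 point mapping for the operative procedure is left to the caller because the abstract does not enumerate it.

```python
def codman_score(age, dependent_status, preop_transfusion_units,
                 emergency, sepsis_or_shock, asa_class, procedure_points):
    """Sum the integer weights of the 13-point Codman's score.

    procedure_points (1-3) is the weight assigned to the operative
    procedure; its mapping is not specified in the abstract.
    """
    score = 0
    score += 1 if age >= 65 else 0                     # age >= 65: 1 point
    score += 1 if dependent_status else 0              # dependent function: 1
    score += 1 if preop_transfusion_units >= 4 else 0  # transfusions >= 4 u: 1
    score += 2 if emergency else 0                     # emergency operation: 2
    score += 2 if sepsis_or_shock else 0               # sepsis/septic shock: 2
    score += 3 if asa_class >= 3 else 0                # ASA class >= 3: 3
    score += procedure_points                          # procedure weight: 1-3
    return score
```

Per the abstract, a score of 8 is the optimal cutoff for separating expected from unexpected in-hospital deaths.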

  20. Empirical Prediction of Aircraft Landing Gear Noise

    NASA Technical Reports Server (NTRS)

    Golub, Robert A. (Technical Monitor); Guo, Yue-Ping

    2005-01-01

    This report documents a semi-empirical/semi-analytical method for landing gear noise prediction. The method is based on scaling laws of the theory of aerodynamic noise generation and correlation of these scaling laws with current available test data. The former gives the method a sound theoretical foundation and the latter quantitatively determines the relations between the parameters of the landing gear assembly and the far field noise, enabling practical predictions of aircraft landing gear noise, both for parametric trends and for absolute noise levels. The prediction model is validated by wind tunnel test data for an isolated Boeing 737 landing gear and by flight data for the Boeing 777 airplane. In both cases, the predictions agree well with data, both in parametric trends and in absolute noise levels.

  1. Pre-analytical and analytical variation of drug determination in segmented hair using ultra-performance liquid chromatography-tandem mass spectrometry.

    PubMed

    Nielsen, Marie Katrine Klose; Johansen, Sys Stybe; Linnet, Kristian

    2014-01-01

    Assessment of the total uncertainty of analytical methods for the measurement of drugs in human hair has mainly been derived from the analytical variation. However, in hair analysis several other sources of uncertainty contribute to the total uncertainty. Particularly in segmental hair analysis, pre-analytical variations associated with the sampling and segmentation may be significant factors in the assessment of the total uncertainty budget. The aim of this study was to develop and validate a method for the analysis of 31 common drugs in hair using ultra-performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) with focus on the assessment of both the analytical and pre-analytical sampling variations. The validated method was specific, accurate (80-120%), and precise (CV≤20%) across a wide linear concentration range from 0.025 to 25 ng/mg for most compounds. The analytical variation was estimated to be less than 15% for almost all compounds. The method was successfully applied to 25 segmented hair specimens from deceased drug addicts showing a broad pattern of poly-drug use. The pre-analytical sampling variation was estimated from genuine duplicate measurements of two bundles of hair collected from each subject, after subtraction of the analytical component. For the most frequently detected analytes, the pre-analytical variation was estimated to be 26-69%. Thus, the pre-analytical variation was 3-7-fold larger than the analytical variation (7-13%) and hence the dominant component in the total variation (29-70%). The present study demonstrated the importance of including the pre-analytical variation in the assessment of the total uncertainty budget and in the setting of the 95%-uncertainty interval (±2CVT). Excluding the pre-analytical sampling variation could significantly affect the interpretation of results from segmental hair analysis. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  2. External validation of a prediction model for surgical site infection after thoracolumbar spine surgery in a Western European cohort.

    PubMed

    Janssen, Daniël M C; van Kuijk, Sander M J; d'Aumerie, Boudewijn B; Willems, Paul C

    2018-05-16

    A prediction model for surgical site infection (SSI) after spine surgery was developed in 2014 by Lee et al. This model was developed to compute an individual estimate of the probability of SSI after spine surgery based on the patient's comorbidity profile and the invasiveness of surgery. Before any prediction model can be validly implemented in daily medical practice, it should be externally validated to assess how it performs in patients sampled independently from the derivation cohort. We included 898 consecutive patients who underwent instrumented thoracolumbar spine surgery. Overall performance was quantified using Nagelkerke's R² statistic, and discriminative ability was quantified as the area under the receiver operating characteristic curve (AUC). We computed the calibration slope of the calibration plot to judge prediction accuracy. Sixty patients developed an SSI. The overall performance of the prediction model in our population was poor: Nagelkerke's R² was 0.01. The AUC was 0.61 (95% confidence interval (CI) 0.54-0.68). The estimated slope of the calibration plot was 0.52. The previously published prediction model showed poor performance in our academic external validation cohort. To predict SSI after instrumented thoracolumbar spine surgery in the present population, a better-fitting prediction model should be developed.
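
The AUC reported here (and the c-statistic in several records below) has a simple rank interpretation that can be computed without any ROC-curve machinery. A minimal sketch, using the Mann-Whitney formulation; the function name is ours:

```python
def c_statistic(risks, events):
    """AUC via its rank (Mann-Whitney) interpretation: the probability
    that a randomly chosen patient with the event was assigned a higher
    predicted risk than a randomly chosen patient without it.
    Ties count one half. risks: predicted probabilities; events: 0/1.
    """
    pos = [r for r, e in zip(risks, events) if e]
    neg = [r for r, e in zip(risks, events) if not e]
    # count concordant pairs, crediting ties with 0.5
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))
```

By this reading, the study's AUC of 0.61 means the model ranks a true SSI case above a non-case only 61% of the time, barely better than chance (0.5).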

  3. Analytical model for vibration prediction of two parallel tunnels in a full-space

    NASA Astrophysics Data System (ADS)

    He, Chao; Zhou, Shunhua; Guo, Peijun; Di, Honggui; Zhang, Xiaohui

    2018-06-01

    This paper presents a three-dimensional analytical model for the prediction of ground vibrations from two parallel tunnels embedded in a full-space. The two tunnels are modelled as cylindrical shells of infinite length, and the surrounding soil is modelled as a full-space with two cylindrical cavities. A virtual interface is introduced to divide the soil into the right layer and the left layer. By transforming the cylindrical waves into plane waves, the solution of wave propagation in the full-space with two cylindrical cavities is obtained. The transformations from plane waves to cylindrical waves are then used to satisfy the boundary conditions on the tunnel-soil interfaces. The proposed model provides a highly efficient tool to predict the ground vibration induced by underground railways, which accounts for the dynamic interaction between neighbouring tunnels. Analysis of the vibration fields produced over a range of frequencies and soil properties is conducted. When the distance between the two tunnels is smaller than three times the tunnel diameter, the interaction between neighbouring tunnels is highly significant, at times on the order of 20 dB. It is therefore necessary to consider the interaction between neighbouring tunnels for the prediction of ground vibrations induced by underground railways.

  4. Ruling out coronary artery disease in primary care: development and validation of a simple prediction rule.

    PubMed

    Bösner, Stefan; Haasenritter, Jörg; Becker, Annette; Karatolios, Konstantinos; Vaucher, Paul; Gencer, Baris; Herzig, Lilli; Heinzel-Gutenbrunner, Monika; Schaefer, Juergen R; Abu Hani, Maren; Keller, Heidi; Sönnichsen, Andreas C; Baum, Erika; Donner-Banzhoff, Norbert

    2010-09-07

    Chest pain can be caused by various conditions, with life-threatening cardiac disease being of greatest concern. Prediction scores to rule out coronary artery disease have been developed for use in emergency settings. We developed and validated a simple prediction rule for use in primary care. We conducted a cross-sectional diagnostic study in 74 primary care practices in Germany. Primary care physicians recruited all consecutive patients who presented with chest pain (n = 1249) and recorded symptoms and findings for each patient (derivation cohort). An independent expert panel reviewed follow-up data obtained at six weeks and six months on symptoms, investigations, hospital admissions and medications to determine the presence or absence of coronary artery disease. Adjusted odds ratios of relevant variables were used to develop a prediction rule. We calculated measures of diagnostic accuracy for different cut-off values for the prediction scores using data derived from another prospective primary care study (validation cohort). The prediction rule contained five determinants (age/sex, known vascular disease, patient assumes pain is of cardiac origin, pain is worse during exercise, and pain is not reproducible by palpation), with the score ranging from 0 to 5 points. The area under the curve (receiver operating characteristic curve) was 0.87 (95% confidence interval [CI] 0.83-0.91) for the derivation cohort and 0.90 (95% CI 0.87-0.93) for the validation cohort. The best overall discrimination was with a cut-off value of 3 (positive result 3-5 points; negative result 0-2 points). The prediction rule proved to be robust in the validation cohort. It can help to rule out coronary artery disease in patients presenting with chest pain in primary care.
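
A five-determinant rule with a cut-off of 3 can be scored and evaluated in a few lines. In this sketch the dictionary keys are illustrative paraphrases of the abstract's determinants, not the study's exact variable definitions:

```python
DETERMINANTS = [  # illustrative paraphrases of the five binary items
    "age_sex_risk_group",
    "known_vascular_disease",
    "patient_assumes_cardiac_origin",
    "pain_worse_on_exercise",
    "pain_not_reproducible_by_palpation",
]

def rule_score(patient):
    """Count how many of the five binary determinants are present (0-5)."""
    return sum(1 for d in DETERMINANTS if patient.get(d))

def sens_spec(scores, has_cad, cutoff=3):
    """Sensitivity and specificity of the rule at a given cut-off."""
    tp = sum(1 for s, y in zip(scores, has_cad) if s >= cutoff and y)
    fn = sum(1 for s, y in zip(scores, has_cad) if s < cutoff and y)
    tn = sum(1 for s, y in zip(scores, has_cad) if s < cutoff and not y)
    fp = sum(1 for s, y in zip(scores, has_cad) if s >= cutoff and not y)
    return tp / (tp + fn), tn / (tn + fp)
```

For a rule-out tool, the clinically relevant quantity is high sensitivity at the chosen cut-off, so that few true coronary artery disease cases fall into the negative (0-2 point) group.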

  5. Description of a Generalized Analytical Model for the Micro-dosimeter Response

    NASA Technical Reports Server (NTRS)

    Badavi, Francis F.; Stewart-Sloan, Charlotte R.; Xapsos, Michael A.; Shinn, Judy L.; Wilson, John W.; Hunter, Abigail

    2007-01-01

    An analytical prediction capability for space radiation in Low Earth Orbit (LEO), correlated with the Space Transportation System (STS) Shuttle Tissue Equivalent Proportional Counter (TEPC) measurements, is presented. The model takes into consideration the energy loss straggling and chord length distribution of the TEPC detector, and is capable of predicting energy deposition fluctuations in a micro-volume by incoming ions through both direct and indirect ionic events. The charged particle transport calculations correlated with STS 56, 51, 110 and 114 flights are accomplished by utilizing the most recent version (2005) of the Langley Research Center (LaRC) deterministic ionized particle transport code High charge (Z) and Energy TRaNsport (HZETRN), which has been extensively validated with laboratory beam measurements and available space flight data. The agreement between the TEPC model prediction (response function) and the TEPC measured differential and integral spectra in the lineal energy (y) domain is promising.

  6. Development and validation of a prediction model for functional decline in older medical inpatients.

    PubMed

    Takada, Toshihiko; Fukuma, Shingo; Yamamoto, Yosuke; Tsugihashi, Yukio; Nagano, Hiroyuki; Hayashi, Michio; Miyashita, Jun; Azuma, Teruhisa; Fukuhara, Shunichi

    2018-05-17

    To prevent functional decline in older inpatients, identification of high-risk patients is crucial. The aim of this study was to develop and validate a prediction model to assess the risk of functional decline in older medical inpatients. In this retrospective cohort study, patients ≥65 years admitted acutely to medical wards were included. The healthcare database of 246 acute care hospitals (n = 229,913) was used for derivation, and two acute care hospitals (n = 1767 and 5443, respectively) were used for validation. Data were collected using a national administrative claims and discharge database. Functional decline was defined as a decline in the Katz score at discharge compared with that on admission. About 6% of patients in the derivation cohort and 9% and 2% in each validation cohort developed functional decline. A model with 7 items, age, body mass index, living in a nursing home, ambulance use, need for assistance in walking, dementia, and bedsore, was developed. On internal validation, it demonstrated a c-statistic of 0.77 (95% confidence interval (CI) = 0.767-0.771) and good fit on the calibration plot. On external validation, the c-statistics were 0.79 (95% CI = 0.77-0.81) and 0.75 (95% CI = 0.73-0.77) for each cohort, respectively. Calibration plots showed good fit in one cohort and overestimation in the other. A prediction model for functional decline in older medical inpatients was derived and validated. It is expected that use of the model would lead to early identification of high-risk patients and the introduction of early interventions. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. Project Evaluation: Validation of a Scale and Analysis of Its Predictive Capacity

    ERIC Educational Resources Information Center

    Fernandes Malaquias, Rodrigo; de Oliveira Malaquias, Fernanda Francielle

    2014-01-01

    The objective of this study was to validate a scale for assessment of academic projects. As a complement, we examined its predictive ability by comparing the scores of advised/corrected projects based on the model and the final scores awarded to the work by an examining panel (approximately 10 months after the project design). Results of…

  8. An examination of the predictive validity of the Risk Matrix 2000 in England and Wales.

    PubMed

    Barnett, Georgia D; Wakeling, Helen C; Howard, Philip D

    2010-12-01

    This study examined the predictive validity of an actuarial risk-assessment tool with convicted sexual offenders in England and Wales. A modified version of the RM2000/s scale and the RM2000 v and c scales (Thornton et al., 2003) were examined for accuracy in predicting proven sexual violent, nonsexual violent, and combined sexual and/or nonsexual violent reoffending in a sample of sexual offenders who had either started a community sentence or been released from prison into the community by March 2007. Rates of proven reoffending were examined at 2 years for the majority of the sample (n = 4,946), and 4 years (n = 578) for those for whom these data were available. The predictive validity of the RM2000 scales was also explored for different subgroups of sexual offenders to assess the robustness of the tool. Both the modified RM2000/s and the complete v and c scales effectively classified offenders into distinct risk categories that differed significantly in rates of proven sexual and/or nonsexual violent reoffending. Survival analyses on the RM2000/s and v scales (N = 9,284) indicated that the higher risk groups offended more quickly and at a higher rate than lower risk groups. The relative predictive validity of the RM2000/s, v, and c, as calculated using Receiver Operating Characteristic (ROC) analyses, was moderate (.68) for RM2000/s and large for both the RM2000/c (.73) and RM2000/v (.80) at the 2-year follow-up. RM2000/s was moderately accurate in predicting relative risk of proven sexual reoffending for a variety of subgroups of sexual offenders.

  9. Assessment of predictive performance in incomplete data by combining internal validation and multiple imputation.

    PubMed

    Wahl, Simone; Boulesteix, Anne-Laure; Zierer, Astrid; Thorand, Barbara; van de Wiel, Mark A

    2016-10-26

    Missing values are a frequent issue in human studies. In many situations, multiple imputation (MI) is an appropriate missing data handling strategy, whereby missing values are imputed multiple times, the analysis is performed in every imputed data set, and the obtained estimates are pooled. If the aim is to estimate (added) predictive performance measures, such as (change in) the area under the receiver-operating characteristic curve (AUC), internal validation strategies become desirable in order to correct for optimism. It is not fully understood how internal validation should be combined with multiple imputation. In a comprehensive simulation study and in a real data set based on blood markers as predictors for mortality, we compare three combination strategies: Val-MI, internal validation followed by MI on the training and test parts separately; MI-Val, MI on the full data set followed by internal validation; and MI(-y)-Val, MI on the full data set omitting the outcome followed by internal validation. Different validation strategies, including bootstrap and cross-validation, different (added) performance measures, and various data characteristics are considered, and the strategies are evaluated with regard to bias and mean squared error of the obtained performance estimates. In addition, we elaborate on the number of resamples and imputations to be used, and adapt a strategy for confidence interval construction to incomplete data. Internal validation is essential in order to avoid optimism, with the bootstrap 0.632+ estimate representing a reliable method to correct for optimism. While estimates obtained by MI-Val are optimistically biased, those obtained by MI(-y)-Val tend to be pessimistic in the presence of a true underlying effect. Val-MI provides largely unbiased estimates, with a slight pessimistic bias with increasing true effect size, number of covariates and decreasing sample size. In Val-MI, accuracy of the estimate is more strongly improved by
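
The Val-MI ordering described above (split first, then impute each part separately) can be sketched with single mean imputation standing in for full multiple imputation — a deliberate simplification; the function names and the 70/30 split are our choices:

```python
import random

def mean_impute(rows):
    """Replace None with the column mean, computed only from the rows
    given (single mean imputation; a stand-in for full MI here)."""
    cols = list(zip(*rows))
    means = [sum(v for v in c if v is not None) /
             max(1, sum(v is not None for v in c)) for c in cols]
    return [[m if v is None else v for v, m in zip(r, means)] for r in rows]

def val_mi_split(data, train_frac=0.7, seed=0):
    """Val-MI ordering: partition into train/test FIRST, then run the
    imputation on each part separately, so no information from the test
    part leaks into the training imputation."""
    rng = random.Random(seed)
    idx = list(range(len(data)))
    rng.shuffle(idx)
    cut = int(train_frac * len(data))
    train = [data[i] for i in idx[:cut]]
    test = [data[i] for i in idx[cut:]]
    return mean_impute(train), mean_impute(test)
```

The contrast with MI-Val is only the ordering: imputing the pooled data set before splitting lets the outcome and the test rows inform the imputation model, which is the source of the optimistic bias the study reports.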

  10. An Analytical Model for the Prediction of a Micro-Dosimeter Response Function

    NASA Technical Reports Server (NTRS)

    Badavi, Francis F.; Xapsos, Mike

    2008-01-01

    A rapid analytical procedure for the prediction of a micro-dosimeter response function in low Earth orbit (LEO), correlated with the Space Transportation System (STS, shuttle) Tissue Equivalent Proportional Counter (TEPC) measurements is presented. The analytical model takes into consideration the energy loss straggling and chord length distribution of the detector, and is capable of predicting energy deposition fluctuations in a cylindrical micro-volume of arbitrary aspect ratio (height/diameter) by incoming ions through both direct and indirect (ray) events. At any designated (ray traced) target point within the vehicle, the model accepts the differential flux spectrum of Galactic Cosmic Rays (GCR) and/or trapped protons at LEO as input. On a desktop PC, the response function of TEPC for each ion in the GCR/trapped field is computed at the average rate of 30 seconds/ion. The ionizing radiation environment at LEO is represented by O'Neill fs GCR model (2004), covering charged particles in the 1 less than or equal to Z less than or equal to 28. O'Neill's free space GCR model is coupled with the Langley Research Center (LaRC) angular dependent geomagnetic cutoff model to compute the transmission coefficient in LEO. The trapped proton environment is represented by a LaRC developed time dependent procedure which couples the AP8MIN/AP8MAX, Deep River Neutron Monitor (DRNM) and F10.7 solar radio frequency measurements. The albedo neutron environment is represented by the extrapolation of the Atmospheric Ionizing Radiation (AIR) measurements. The charged particle transport calculations correlated with STS 51 and 114 flights are accomplished by using the most recent version (2005) of the LaRC deterministic High charge (Z) and Energy TRaNsport (HZETRN) code. 
We present the correlations between the TEPC model predictions (response function) and TEPC measured differential/integral spectra in the lineal energy (y) domain for both GCR and trapped protons, with the conclusion

  11. Laboratory Analytical Procedures | Bioenergy | NREL

    Science.gov Websites

    analytical procedures (LAPs) to provide validated methods for biofuels and pyrolysis bio-oils research . Biomass Compositional Analysis These lab procedures provide tested and accepted methods for performing

  12. Predicting surgical site infection after spine surgery: a validated model using a prospective surgical registry.

    PubMed

    Lee, Michael J; Cizik, Amy M; Hamilton, Deven; Chapman, Jens R

    2014-09-01

    The impact of surgical site infection (SSI) is substantial. Although previous study has determined relative risk and odds ratio (OR) values to quantify risk factors, these values may be difficult to translate to the patient during counseling of surgical options. Ideally, a model that predicts absolute risk of SSI, rather than relative risk or OR values, would greatly enhance the discussion of safety of spine surgery. To date, there is no risk stratification model that specifically predicts the risk of medical complication. The purpose of this study was to create and validate a predictive model for the risk of SSI after spine surgery. This study performs a multivariate analysis of SSI after spine surgery using a large prospective surgical registry. Using the results of this analysis, this study will then create and validate a predictive model for SSI after spine surgery. The patient sample is from a high-quality surgical registry from our two institutions with prospectively collected, detailed demographic, comorbidity, and complication data. An SSI that required return to the operating room for surgical debridement. Using a prospectively collected surgical registry of more than 1,532 patients with extensive demographic, comorbidity, surgical, and complication details recorded for 2 years after the surgery, we identified several risk factors for SSI after multivariate analysis. Using the beta coefficients from those regression analyses, we created a model to predict the occurrence of SSI after spine surgery. We split our data into two subsets for internal and cross-validation of our model. We created a predictive model based on our beta coefficients from our multivariate analysis. The final predictive model for SSI had a receiver-operator curve characteristic of 0.72, considered to be a fair measure. The final model has been uploaded for use on SpineSage.com. We present a validated model for predicting SSI after spine surgery. 
The value in this model is that it gives

  13. Analytical expressions for the nonlinear interference in dispersion managed transmission coherent optical systems

    NASA Astrophysics Data System (ADS)

    Qiao, Yaojun; Li, Ming; Yang, Qiuhong; Xu, Yanfei; Ji, Yuefeng

    2015-01-01

    Closed-form expressions for the nonlinear interference of dense wavelength-division-multiplexed (WDM) systems with dispersion managed transmission (DMT) are derived. We carry out a simulative validation by addressing an ample and significant set of Nyquist-WDM systems based on polarization-multiplexed quadrature phase-shift keying (PM-QPSK) subcarriers at a baud rate of 32 Gbaud per channel. Simulation results show that the simple closed-form analytical expressions provide an effective tool for the quick and accurate prediction of system performance in DMT coherent optical systems.

  14. A prediction algorithm for first onset of major depression in the general population: development and validation.

    PubMed

    Wang, JianLi; Sareen, Jitender; Patten, Scott; Bolton, James; Schmitz, Norbert; Birney, Arden

    2014-05-01

    Prediction algorithms are useful for making clinical decisions and for population health planning. However, such prediction algorithms for first onset of major depression do not exist. The objective of this study was to develop and validate a prediction algorithm for first onset of major depression in the general population. Longitudinal study design with approximately 3-year follow-up. The study was based on data from a nationally representative sample of the US general population. A total of 28 059 individuals who participated in Waves 1 and 2 of the US National Epidemiologic Survey on Alcohol and Related Conditions and who had not had major depression at Wave 1 were included. The prediction algorithm was developed using logistic regression modelling in 21 813 participants from three census regions. The algorithm was validated in participants from the 4th census region (n=6246). Major depression occurred since Wave 1 of the National Epidemiologic Survey on Alcohol and Related Conditions, assessed by the Alcohol Use Disorder and Associated Disabilities Interview Schedule-diagnostic and statistical manual for mental disorders IV. A prediction algorithm containing 17 unique risk factors was developed. The algorithm had good discriminative power (C statistic=0.7538, 95% CI 0.7378 to 0.7699) and excellent calibration (F-adjusted test=1.00, p=0.448) with the weighted data. In the validation sample, the algorithm had a C statistic of 0.7259 and excellent calibration (Hosmer-Lemeshow χ(2)=3.41, p=0.906). The developed prediction algorithm has good discrimination and calibration capacity. It can be used by clinicians, mental health policy-makers and service planners and the general public to predict future risk of having major depression. The application of the algorithm may lead to increased personalisation of treatment, better clinical decisions and more optimal mental health service planning.

  15. External Validation Study of First Trimester Obstetric Prediction Models (Expect Study I): Research Protocol and Population Characteristics.

    PubMed

    Meertens, Linda Jacqueline Elisabeth; Scheepers, Hubertina Cj; De Vries, Raymond G; Dirksen, Carmen D; Korstjens, Irene; Mulder, Antonius Lm; Nieuwenhuijze, Marianne J; Nijhuis, Jan G; Spaanderman, Marc Ea; Smits, Luc Jm

    2017-10-26

    A number of first-trimester prediction models addressing important obstetric outcomes have been published. However, most models have not been externally validated. External validation is essential before implementing a prediction model in clinical practice. The objective of this paper is to describe the design of a study to externally validate existing first-trimester obstetric prediction models, based upon maternal characteristics and standard measurements (e.g., blood pressure), for the risk of pre-eclampsia (PE), gestational diabetes mellitus (GDM), spontaneous preterm birth (PTB), small-for-gestational-age (SGA) infants, and large-for-gestational-age (LGA) infants among Dutch pregnant women (Expect Study I). The results of a pilot study on the feasibility and acceptability of the recruitment process and the comprehensibility of the Pregnancy Questionnaire 1 are also reported. A multicenter prospective cohort study was performed in the Netherlands between July 1, 2013 and December 31, 2015. First-trimester obstetric prediction models were systematically selected from the literature. Predictor variables were measured by the Web-based Pregnancy Questionnaire 1, and pregnancy outcomes were established using the Postpartum Questionnaire 1 and medical records. Information about maternal health-related quality of life, costs, and satisfaction with Dutch obstetric care was collected from a subsample of women. A pilot study was carried out before the official start of inclusion. External validity of the models will be evaluated by assessing discrimination and calibration. Based on the pilot study, minor improvements were made to the recruitment process and the online Pregnancy Questionnaire 1. The validation cohort consists of 2614 women. Data analysis of the external validation study is in progress. This study will offer insight into the generalizability of existing, non-invasive first-trimester prediction models for various obstetric outcomes in a Dutch obstetric population

  16. Joint use of over- and under-sampling techniques and cross-validation for the development and assessment of prediction models.

    PubMed

    Blagus, Rok; Lusa, Lara

    2015-11-04

    Prediction models are used in clinical research to develop rules that can be used to accurately predict the outcome of patients based on some of their characteristics. They represent a valuable tool in the decision-making process of clinicians and health policy makers, as they enable them to estimate the probability that patients have or will develop a disease, will respond to a treatment, or that their disease will recur. The interest devoted to prediction models in the biomedical community has been growing in the last few years. Often the data used to develop the prediction models are class-imbalanced, as only a few patients experience the event (and therefore belong to the minority class). Prediction models developed using class-imbalanced data tend to achieve sub-optimal predictive accuracy in the minority class. This problem can be diminished by using sampling techniques aimed at balancing the class distribution. These techniques include under- and oversampling, where a fraction of the majority class samples are retained in the analysis or new samples from the minority class are generated. The correct assessment of how the prediction model is likely to perform on independent data is of crucial importance; in the absence of an independent data set, cross-validation is normally used. While the importance of correct cross-validation is well documented in the biomedical literature, the challenges posed by the joint use of sampling techniques and cross-validation have not been addressed. We show that care must be taken to ensure that cross-validation is performed correctly on sampled data, and that the risk of overestimating the predictive accuracy is greater when oversampling techniques are used. Examples based on the re-analysis of real datasets and simulation studies are provided. We identify some results from the biomedical literature where cross-validation was performed incorrectly and where we expect that the performance of oversampling techniques was heavily
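The pitfall described above can be illustrated with a minimal sketch (entirely synthetic data; plain random oversampling stands in for the more elaborate techniques the paper discusses): the resampling step must be applied inside each cross-validation fold, to the training portion only, so that duplicated minority samples never leak into the test fold.

```python
import numpy as np

def random_oversample(X, y, seed=0):
    """Duplicate minority-class rows at random until both classes are balanced."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    minority = classes[np.argmin(counts)]
    n_extra = counts.max() - counts.min()
    idx = rng.choice(np.flatnonzero(y == minority), size=n_extra, replace=True)
    return np.vstack([X, X[idx]]), np.concatenate([y, y[idx]])

def cv_folds(n, k=5, seed=0):
    """Yield (train_idx, test_idx) index pairs for k-fold cross-validation."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(n)
    for fold in np.array_split(order, k):
        yield np.setdiff1d(order, fold), fold

# A toy class-imbalanced dataset: 4 events among 20 patients.
X = np.arange(40, dtype=float).reshape(20, 2)
y = np.array([1] * 4 + [0] * 16)

# Correct usage: oversample INSIDE each fold, on the training data only.
for tr, te in cv_folds(len(y)):
    X_tr, y_tr = random_oversample(X[tr], y[tr])
    # fit the model on (X_tr, y_tr); evaluate on the untouched X[te], y[te]
```

Oversampling the full dataset before splitting would place copies of the same minority patient in both the training and test folds, which is exactly the overestimation mechanism the authors warn about.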

  17. Analytical Validation of the ReEBOV Antigen Rapid Test for Point-of-Care Diagnosis of Ebola Virus Infection

    PubMed Central

    Cross, Robert W.; Boisen, Matthew L.; Millett, Molly M.; Nelson, Diana S.; Oottamasathien, Darin; Hartnett, Jessica N.; Jones, Abigal B.; Goba, Augustine; Momoh, Mambu; Fullah, Mohamed; Bornholdt, Zachary A.; Fusco, Marnie L.; Abelson, Dafna M.; Oda, Shunichiro; Brown, Bethany L.; Pham, Ha; Rowland, Megan M.; Agans, Krystle N.; Geisbert, Joan B.; Heinrich, Megan L.; Kulakosky, Peter C.; Shaffer, Jeffrey G.; Schieffelin, John S.; Kargbo, Brima; Gbetuwa, Momoh; Gevao, Sahr M.; Wilson, Russell B.; Saphire, Erica Ollmann; Pitts, Kelly R.; Khan, Sheik Humarr; Grant, Donald S.; Geisbert, Thomas W.; Branco, Luis M.; Garry, Robert F.

    2016-01-01

    Background. Ebola virus disease (EVD) is a severe viral illness caused by Ebola virus (EBOV). The 2013–2016 EVD outbreak in West Africa is the largest recorded, with >11 000 deaths. Development of the ReEBOV Antigen Rapid Test (ReEBOV RDT) was expedited to provide a point-of-care test for suspected EVD cases. Methods. Recombinant EBOV viral protein 40 antigen was used to derive polyclonal antibodies for RDT and enzyme-linked immunosorbent assay development. ReEBOV RDT limits of detection (LOD), specificity, and interference were analytically validated on the basis of Food and Drug Administration (FDA) guidance. Results. The ReEBOV RDT specificity estimate was 95% for donor serum panels and 97% for donor whole-blood specimens. The RDT demonstrated sensitivity to 3 species of Ebolavirus (Zaire ebolavirus, Sudan ebolavirus, and Bundibugyo ebolavirus) associated with human disease, with no cross-reactivity by pathogens associated with non-EBOV febrile illness, including malaria parasites. Interference testing exhibited no reactivity by medications in common use. The LOD for antigen was 4.7 ng/test in serum and 9.4 ng/test in whole blood. Quantitative reverse transcription–polymerase chain reaction testing of nonhuman primate samples determined the range to be equivalent to 3.0 × 10⁵–9.0 × 10⁸ genomes/mL. Conclusions. The analytical validation presented here contributed to the ReEBOV RDT being the first antigen-based assay to receive FDA and World Health Organization emergency use authorization for this EVD outbreak, in February 2015. PMID:27587634

  18. On various metrics used for validation of predictive QSAR models with applications in virtual screening and focused library design.

    PubMed

    Roy, Kunal; Mitra, Indrani

    2011-07-01

    Quantitative structure-activity relationships (QSARs) have important applications in drug discovery research, environmental fate modeling, property prediction, etc. Validation has been recognized as a very important step in QSAR model development. Because one of the chief objectives of QSAR modeling is to predict the activity/property/toxicity of new chemicals falling within the domain of applicability of the developed models, and because QSARs are used for regulatory decisions, checking the reliability of the models and the confidence of their predictions is essential; both can be judged during the validation process. One prime application of a statistically significant QSAR model is virtual screening for molecules with improved potency based on the pharmacophoric features and the descriptors appearing in the QSAR model. Validated QSAR models may also be utilized for the design of focused libraries, which may subsequently be screened for the selection of hits. The present review focuses on the various metrics used for validation of predictive QSAR models, together with an overview of the application of QSAR models in the fields of virtual screening and focused library design for diverse series of compounds, with citation of some recent examples.

  19. Prediction of prostate cancer in unscreened men: external validation of a risk calculator.

    PubMed

    van Vugt, Heidi A; Roobol, Monique J; Kranse, Ries; Määttänen, Liisa; Finne, Patrik; Hugosson, Jonas; Bangma, Chris H; Schröder, Fritz H; Steyerberg, Ewout W

    2011-04-01

    Prediction models need external validation to assess their value beyond the setting from which they were derived. To assess the external validity of the European Randomized study of Screening for Prostate Cancer (ERSPC) risk calculator (www.prostatecancer-riskcalculator.com) for the probability of having a positive prostate biopsy (P(posb)). The ERSPC risk calculator was based on data from the initial screening round of the ERSPC section Rotterdam and validated in 1825 and 531 men biopsied at the initial screening round in the Finnish and Swedish sections of the ERSPC, respectively. P(posb) was calculated using serum prostate-specific antigen (PSA), outcome of digital rectal examination (DRE), transrectal ultrasound, and ultrasound-assessed prostate volume. The external validity was assessed for the presence of cancer at biopsy by calibration (agreement between observed and predicted outcomes), discrimination (separation of those with and without cancer), and decision curves (for clinical usefulness). Prostate cancer was detected in 469 men (26%) of the Finnish cohort and in 124 men (23%) of the Swedish cohort. Systematic miscalibration was present in both cohorts (mean predicted probability 34% versus 26% observed, and 29% versus 23% observed; both p < 0.001). The areas under the curves were 0.76 and 0.78, and substantially lower for the model with PSA only (0.64 and 0.68, respectively). The model proved clinically useful for any decision threshold compared with a model with PSA only, PSA and DRE, or biopsying all men. A limitation is that the model is based on sextant biopsy results. The ERSPC risk calculator discriminated well between those with and without prostate cancer among initially screened men, but overestimated the risk of a positive biopsy. Further research is necessary to assess the performance and applicability of the ERSPC risk calculator when a clinical setting is considered rather than a screening setting. Copyright © 2010 Elsevier Ltd. 
All rights reserved.
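The two validation criteria applied in this study reduce to short computations. The sketch below (synthetic labels and predicted probabilities, not ERSPC data) checks calibration-in-the-large by comparing the mean predicted risk with the observed event rate, and computes discrimination (AUC) via the rank-sum identity:

```python
import numpy as np

def calibration_in_the_large(y, p):
    """Return (mean predicted risk, observed event rate); a large gap
    between the two indicates systematic miscalibration."""
    return float(np.mean(p)), float(np.mean(y))

def auc(y, p):
    """Area under the ROC curve via the Mann-Whitney rank-sum identity
    (no tie handling, for clarity)."""
    y, p = np.asarray(y), np.asarray(p)
    ranks = p.argsort().argsort() + 1        # 1-based ranks of the predictions
    n1, n0 = (y == 1).sum(), (y == 0).sum()
    return float((ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * n0))
```

On the Finnish cohort reported above, calibration-in-the-large is exactly the 34% predicted vs. 26% observed comparison, and the AUC of 0.76 is the discrimination measure.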

  20. A Quantitative Structure Activity Relationship for acute oral toxicity of pesticides on rats: Validation, domain of application and prediction.

    PubMed

    Hamadache, Mabrouk; Benkortbi, Othmane; Hanini, Salah; Amrane, Abdeltif; Khaouane, Latifa; Si Moussa, Cherif

    2016-02-13

    Quantitative Structure Activity Relationship (QSAR) models are expected to play an important role in the risk assessment of chemicals on humans and the environment. In this study, we developed a validated QSAR model to predict the acute oral toxicity of 329 pesticides to rats, because few QSAR models have been devoted to predicting the Lethal Dose 50 (LD50) of pesticides in rats. This QSAR model is based on 17 molecular descriptors, and is robust, externally predictive, and characterized by a good applicability domain. The best results were obtained with a 17/9/1 Artificial Neural Network model trained with the quasi-Newton backpropagation (BFGS) algorithm. The prediction accuracy for the external validation set was estimated by the Q²ext and the root mean square error (RMSE), which are equal to 0.948 and 0.201, respectively. 98.6% of the external validation set is correctly predicted, and the present model proved to be superior to models previously published. Accordingly, the model developed in this study provides excellent predictions and can be used to predict the acute oral toxicity of pesticides, particularly for those that have not been tested as well as for new pesticides. Copyright © 2015 Elsevier B.V. All rights reserved.
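Both external-validation metrics reported above have simple closed forms; a sketch with made-up numbers (not the pesticide data):

```python
import numpy as np

def q2_ext(y_obs, y_pred, y_train_mean):
    """External Q²: 1 minus the sum of squared prediction errors on the
    external set, divided by the squared deviations of the external
    observations from the TRAINING-set mean."""
    y_obs, y_pred = np.asarray(y_obs, float), np.asarray(y_pred, float)
    press = np.sum((y_obs - y_pred) ** 2)
    ss = np.sum((y_obs - y_train_mean) ** 2)
    return 1.0 - press / ss

def rmse(y_obs, y_pred):
    """Root mean square error of the predictions."""
    y_obs, y_pred = np.asarray(y_obs, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_obs - y_pred) ** 2)))
```

Q²ext approaches 1 only when external predictions beat the naive "predict the training mean" baseline, which is why it is reported alongside RMSE.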

  1. The MCNP6 Analytic Criticality Benchmark Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. Many of the remaining problems were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.
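In practice, checking a Monte Carlo result against an analytical benchmark is a statistical agreement test: the computed eigenvalue, with its reported standard deviation, should bracket the exact solution. A minimal sketch of such a check (hypothetical values, not actual MCNP output):

```python
def agrees(k_mc, sigma, k_exact, n_sigma=3.0):
    """True if the Monte Carlo k-eff estimate lies within n_sigma reported
    standard deviations of the exact analytical eigenvalue."""
    return abs(k_mc - k_exact) <= n_sigma * sigma

# Example: a computed k-eff of 1.0002 +/- 0.0005 vs an exact k of 1.0
print(agrees(1.0002, 0.0005, 1.0))
```

A persistent failure of this test across independent runs points to an algorithmic or data error, which is exactly what verification is meant to expose.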

  2. Analytical method for predicting the pressure distribution about a nacelle at transonic speeds

    NASA Technical Reports Server (NTRS)

    Keith, J. S.; Ferguson, D. R.; Merkle, C. L.; Heck, P. H.; Lahti, D. J.

    1973-01-01

    The formulation and development of a computer analysis for the calculation of streamlines and pressure distributions around two-dimensional (planar and axisymmetric) isolated nacelles at transonic speeds are described. The computerized flow field analysis is designed to predict the transonic flow around long and short high-bypass-ratio fan duct nacelles with inlet flows and with exhaust flows having appropriate aerothermodynamic properties. The flow field boundaries are located as far upstream and downstream as necessary to obtain minimum disturbances at the boundary. The far-field lateral flow field boundary is analytically defined to exactly represent free-flight conditions or solid wind tunnel wall effects. The inviscid solution technique is based on a Streamtube Curvature Analysis. The computer program utilizes an automatic grid refinement procedure and solves the flow field equations with a matrix relaxation technique. The boundary layer displacement effects and the onset of turbulent separation are included, based on the compressible turbulent boundary layer solution method of Stratford and Beavers and on the turbulent separation prediction method of Stratford.

  3. Analytical prediction of sub-surface thermal history in translucent tissue phantoms during plasmonic photo-thermotherapy (PPTT).

    PubMed

    Dhar, Purbarun; Paul, Anup; Narasimhan, Arunn; Das, Sarit K

    2016-12-01

    Knowledge of thermal history and/or distribution in biological tissues during laser based hyperthermia is essential to achieve necrosis of tumour/carcinoma cells. A semi-analytical model to predict sub-surface thermal distribution in translucent, soft, tissue mimics has been proposed. The model can accurately predict the spatio-temporal temperature variations along depth and the anomalous thermal behaviour in such media, viz. occurrence of sub-surface temperature peaks. Based on optical and thermal properties, the augmented temperature and shift of the peak positions in case of gold nanostructure mediated tissue phantom hyperthermia can be predicted. Employing inverse approach, the absorption coefficient of nano-graphene infused tissue mimics is determined from the peak temperature and found to provide appreciably accurate predictions along depth. Furthermore, a simplistic, dimensionally consistent correlation to theoretically determine the position of the peak in such media is proposed and found to be consistent with experiments and computations. The model shows promise in predicting thermal distribution induced by lasers in tissues and deduction of therapeutic hyperthermia parameters, thereby assisting clinical procedures by providing a priori estimates. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Incremental Validity of Useful Field of View Subtests for the Prediction of Instrumental Activities of Daily Living

    PubMed Central

    Aust, Frederik; Edwards, Jerri D.

    2015-01-01

    Introduction The Useful Field of View Test (UFOV®) is a cognitive measure that predicts older adults’ ability to perform a range of everyday activities. However, little is known about the individual contribution of each subtest to these predictions, and the underlying constructs of UFOV performance remain a topic of debate. Method We investigated the incremental validity of UFOV subtests for the prediction of Instrumental Activities of Daily Living (IADL) performance in two independent datasets, the SKILL (n = 828) and ACTIVE (n = 2426) studies. We then explored the cognitive and visual abilities assessed by UFOV using a range of neuropsychological and vision tests administered in the SKILL study. Results In the four-subtest variant of UFOV, only subtests 2 and 3 consistently made independent contributions to the prediction of IADL performance across three different behavioral measures. In all cases, the incremental validity of UFOV subtests 1 and 4 was negligible. Furthermore, we found that UFOV was related to processing speed, general non-speeded cognition, and visual function; the omission of subtests 1 and 4 from the test score did not affect these associations. Conclusions UFOV subtests 1 and 4 appear to be of limited use to predict IADL and possibly other everyday activities. Future experimental research should investigate if shortening the UFOV by omitting these subtests is a reliable and valid assessment approach. PMID:26782018

  5. Bimodal fuzzy analytic hierarchy process (BFAHP) for coronary heart disease risk assessment.

    PubMed

    Sabahi, Farnaz

    2018-04-04

    Rooted deeply in medical multiple criteria decision-making (MCDM), risk assessment is very important, especially when applied to the risk of being affected by deadly diseases such as coronary heart disease (CHD). CHD risk assessment is a stochastic, uncertain, and highly dynamic process influenced by various known and unknown variables. In recent years, there has been great interest in the fuzzy analytic hierarchy process (FAHP), a popular methodology for dealing with uncertainty in MCDM. This paper proposes a new FAHP, the bimodal fuzzy analytic hierarchy process (BFAHP), which augments two aspects of knowledge, probability and validity, to fuzzy numbers to better deal with uncertainty. In BFAHP, fuzzy validity is computed by aggregating the validities of relevant risk factors based on expert knowledge and collective intelligence. By considering both soft and statistical data, we compute the fuzzy probability of risk factors using the Bayesian formulation. In the BFAHP approach, these fuzzy validities and fuzzy probabilities are used to construct a reciprocal comparison matrix. We then aggregate fuzzy probabilities and fuzzy validities in a pairwise manner for each risk factor and each alternative. BFAHP classifies patients as affected or not affected by ranking high and low risks. For evaluation, the proposed approach is applied to the risk of being affected by CHD using a real dataset of 152 patients from Iranian hospitals. Simulation results confirm that adding validity in a fuzzy manner can increase confidence in the results and is clinically useful, especially in the face of incomplete information, when compared with actual results. Applying the proposed BFAHP to CHD risk assessment of the dataset yields a high accuracy rate, above 85%, for correct prediction. In addition, this paper recognizes that the risk factors of diastolic blood pressure in men and high-density lipoprotein in women are more important in CHD than other risk factors. Copyright © 2018 Elsevier Inc. 
All rights reserved.
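BFAHP builds on the standard AHP machinery of reciprocal pairwise-comparison matrices. The sketch below shows only that crisp core, with illustrative judgment values (it omits the paper's fuzzy-validity and fuzzy-probability aggregation): priority weights are obtained from the principal eigenvector of the comparison matrix.

```python
import numpy as np

# Pairwise comparison matrix for three hypothetical risk factors:
# A[i, j] = judged importance of factor i over factor j, with the
# reciprocal property A[j, i] = 1 / A[i, j].
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Priority weights = normalized principal eigenvector (standard crisp AHP).
vals, vecs = np.linalg.eig(A)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()
```

In the full BFAHP, each matrix entry would instead be a fuzzy number carrying both a probability and a validity component before this ranking step.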

  6. Cross-validation of oxygen uptake prediction during walking in ambulatory persons with multiple sclerosis.

    PubMed

    Agiovlasitis, Stamatis; Motl, Robert W

    2016-01-01

    An equation for predicting the gross oxygen uptake (gross-VO2) during walking for persons with multiple sclerosis (MS) has been developed. Predictors included walking speed and total score from the 12-Item Multiple Sclerosis Walking Scale (MSWS-12). This study examined the validity of this prediction equation in another sample of persons with MS. Participants were 18 persons with MS with limited mobility problems (42 ± 13 years; 14 women). Participants completed the MSWS-12. Gross-VO2 was measured with open-circuit spirometry during treadmill walking at 2.0, 3.0, and 4.0 mph (0.89, 1.34, and 1.79 m·s⁻¹). Absolute percent error was small: 8.3 ± 6.1%, 8.0 ± 5.6%, and 12.2 ± 9.0% at 2.0, 3.0, and 4.0 mph, respectively. Actual gross-VO2 did not differ significantly from predicted gross-VO2 at 2.0 and 3.0 mph, but was significantly higher than predicted gross-VO2 at 4.0 mph (p < 0.001). Bland-Altman plots indicated nearly zero mean difference between actual and predicted gross-VO2 with modest 95% confidence intervals at 2.0 and 3.0 mph, but there was some underestimation at 4.0 mph. Speed and MSWS-12 score provide valid prediction of gross-VO2 during treadmill walking at slow and moderate speeds in ambulatory persons with MS. However, there is a possibility of small underestimation for walking at 4.0 mph.
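The agreement statistics used in this cross-validation are straightforward to reproduce; a sketch with invented measurements (not the study's data):

```python
import numpy as np

def absolute_percent_error(actual, predicted):
    """Per-observation absolute error as a percentage of the actual value."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return 100.0 * np.abs(actual - predicted) / actual

def bland_altman(actual, predicted):
    """Mean difference (bias) and 95% limits of agreement (bias +/- 1.96 SD)."""
    d = np.asarray(actual, float) - np.asarray(predicted, float)
    bias, sd = d.mean(), d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

A near-zero bias with narrow limits of agreement, as reported at 2.0 and 3.0 mph, is the Bland-Altman signature of a valid prediction equation.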

  7. Development and Validation of a Multidisciplinary Tool for Accurate and Efficient Rotorcraft Noise Prediction (MUTE)

    NASA Technical Reports Server (NTRS)

    Liu, Yi; Anusonti-Inthra, Phuriwat; Diskin, Boris

    2011-01-01

    A physics-based, systematically coupled, multidisciplinary prediction tool (MUTE) for rotorcraft noise was developed and validated with a wide range of flight configurations and conditions. MUTE is an aggregation of multidisciplinary computational tools that accurately and efficiently model the physics of the source of rotorcraft noise, and predict the noise at far-field observer locations. It uses systematic coupling approaches among multiple disciplines including Computational Fluid Dynamics (CFD), Computational Structural Dynamics (CSD), and high fidelity acoustics. Within MUTE, advanced high-order CFD tools are used around the rotor blade to predict the transonic flow (shock wave) effects, which generate the high-speed impulsive noise. Predictions of the blade-vortex interaction noise in low speed flight are also improved by using the Particle Vortex Transport Method (PVTM), which preserves the wake flow details required for blade/wake and fuselage/wake interactions. The accuracy of the source noise prediction is further improved by utilizing a coupling approach between CFD and CSD, so that the effects of key structural dynamics, elastic blade deformations, and trim solutions are correctly represented in the analysis. The blade loading information and/or the flow field parameters around the rotor blade predicted by the CFD/CSD coupling approach are used to predict the acoustic signatures at far-field observer locations with a high-fidelity noise propagation code (WOPWOP3). The predicted results from the MUTE tool for rotor blade aerodynamic loading and far-field acoustic signatures are compared and validated against a variety of experimental data sets, such as UH-60A data, DNW test data, and HART II test data.

  8. Medication information leaflets for patients: the further validation of an analytic linguistic framework.

    PubMed

    Clerehan, Rosemary; Hirsh, Di; Buchbinder, Rachelle

    2009-01-01

    While clinicians may routinely use patient information leaflets about drug therapy, a poorly conceived leaflet has the potential to do harm. We previously developed a novel approach to analysing leaflets about a rheumatoid arthritis drug, using an analytic approach based on systemic functional linguistics. The aim of the present study was to verify the validity of the linguistic framework by applying it to two further arthritis drug leaflets. The findings confirmed the applicability of the framework and were used to refine it. A new stage or 'move' in the genre was identified. While the function of many of the moves appeared to be 'to instruct' the patient, the instruction was often unclear. The role relationships expressed in the text were critical to the meaning. As with our previous study, judged on their lexical density, the leaflets resembled academic text. The framework can provide specific tools to assess and produce medication information leaflets to support readers in taking medication. Future work could utilize the framework to evaluate information on other treatments and procedures or on healthcare information more widely.

  9. Predictive modeling of complications.

    PubMed

    Osorio, Joseph A; Scheer, Justin K; Ames, Christopher P

    2016-09-01

    Predictive analytic algorithms are designed to identify patterns in the data that allow for accurate predictions without the need for a hypothesis. Therefore, predictive modeling can provide detailed and patient-specific information that can be readily applied when discussing the risks of surgery with a patient. There are few studies using predictive modeling techniques in the adult spine surgery literature. These types of studies represent the beginning of the use of predictive analytics in spine surgery outcomes. We will discuss the advancements in the field of spine surgery with respect to predictive analytics, the controversies surrounding the technique, and the future directions.

  10. Derivation and Validation of a Clostridium difficile Infection Recurrence Prediction Rule in a National Cohort of Veterans.

    PubMed

    Reveles, Kelly R; Mortensen, Eric M; Koeller, Jim M; Lawson, Kenneth A; Pugh, Mary Jo V; Rumbellow, Sarah A; Argamany, Jacqueline R; Frei, Christopher R

    2018-03-01

    Prior studies have identified risk factors for recurrent Clostridium difficile infection (CDI), but few studies have integrated these factors into a clinical prediction rule that can aid clinical decision-making. The objectives of this study were to derive and validate a CDI recurrence prediction rule to identify patients at risk for first recurrence in a national cohort of veterans. Retrospective cohort study. Veterans Affairs Informatics and Computing Infrastructure. A total of 22,615 adult Veterans Health Administration beneficiaries with first-episode CDI between October 1, 2002, and September 30, 2014; of these patients, 7538 were assigned to the derivation cohort and 15,077 to the validation cohort. A 60-day CDI recurrence prediction rule was created in a derivation cohort using backward logistic regression. Those variables significant at p < 0.01 were assigned an integer score proportional to the regression coefficient. The model was then validated in the derivation cohort and a separate validation cohort. Patients were then split into three risk categories, and rates of recurrence were described for each category. The CDI recurrence prediction rule included the following predictor variables with their respective point values: prior third- and fourth-generation cephalosporins (1 point), prior proton pump inhibitors (1 point), prior antidiarrheals (1 point), nonsevere CDI (2 points), and community-onset CDI (3 points). In the derivation cohort, the 60-day CDI recurrence risk for each score ranged from 7.5% (0 points) to 57.9% (8 points). The risk score was strongly correlated with recurrence (R² = 0.94). Patients were split into low-risk (0-2 points), medium-risk (3-5 points), and high-risk (6-8 points) classes and had the following recurrence rates: 8.9%, 20.2%, and 35.0%, respectively. Findings were similar in the validation cohort. Several CDI and patient-specific factors were independently associated with 60-day CDI recurrence risk. 
When integrated into
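The standard recipe for turning regression coefficients into an integer point score, as done above, is to divide each coefficient by the smallest one and round. A sketch with hypothetical coefficients (invented to reproduce the published 1/1/1/2/3 point pattern; the study's actual regression coefficients are not given here):

```python
# Hypothetical logistic-regression coefficients for the five predictors.
coef = {"cephalosporins": 0.42, "ppi": 0.40, "antidiarrheals": 0.38,
        "nonsevere_cdi": 0.81, "community_onset": 1.22}

# Scale so the smallest coefficient maps to 1 point, then round to integers.
base = min(coef.values())
points = {k: int(round(v / base)) for k, v in coef.items()}

def risk_score(patient):
    """Sum the points for each predictor present in the patient record."""
    return sum(points[k] for k, present in patient.items() if present)
```

With all five predictors present, the score reaches the 8-point maximum reported for the derivation cohort.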

  11. Predicting plant uptake of cadmium: validated with long-term contaminated soils.

    PubMed

    Lamb, Dane T; Kader, Mohammed; Ming, Hui; Wang, Liang; Abbasi, Sedigheh; Megharaj, Mallavarapu; Naidu, Ravi

    2016-10-01

    Cadmium accumulates in plant tissues at low soil loadings and is a concern for human health. Yet at higher levels it is also of concern for ecological receptors. We determined Cd partitioning constants for 41 soils to examine the role of soil properties controlling Cd partitioning and plant uptake. From a series of sorption and dose-response studies, transfer functions were developed for predicting Cd uptake in Cucumis sativa L. (cucumber). The parameter log Kf was predicted from soil pHCa, log CEC, and log OC. Transfer of soil pore-water Cd²⁺ to shoots was described with a power function (R² = 0.73). The dataset was validated with 13 long-term contaminated soils (plus 2 control soils) ranging in Cd concentration from 0.2 to 300 mg kg⁻¹. The series of equations predicting shoot Cd from pore-water Cd²⁺ were able to predict the measured data in the independent dataset (root mean square error = 2.2). The good relationship indicated that Cd uptake into cucumber shoots could be predicted from pore-water Cd and Cd²⁺ without other pore-water parameters such as pH or Ca²⁺. The approach may be adapted to a range of plant species.
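A power-function transfer model of the kind fitted here can be estimated by linear regression in log-log space; a sketch on synthetic data (the constants 3.0 and 0.7 are invented, not the study's fitted values):

```python
import numpy as np

# Hypothetical pore-water Cd2+ activities and measured shoot concentrations.
cd_pore = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
cd_shoot = 3.0 * cd_pore ** 0.7          # synthetic data lying on a power law

# Fit cd_shoot = a * cd_pore**b by least squares in log-log coordinates,
# where the model becomes the straight line log(y) = b*log(x) + log(a).
b, log_a = np.polyfit(np.log(cd_pore), np.log(cd_shoot), 1)
a = np.exp(log_a)
```

The goodness of fit of such a model on held-out soils is what the reported R² of 0.73 and root mean square error of 2.2 summarize.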

  12. Nomogram predicting response after chemoradiotherapy in rectal cancer using sequential PET-CT imaging: a multicentric prospective study with external validation.

    PubMed

    van Stiphout, Ruud G P M; Valentini, Vincenzo; Buijsen, Jeroen; Lammering, Guido; Meldolesi, Elisa; van Soest, Johan; Leccisotti, Lucia; Giordano, Alessandro; Gambacorta, Maria A; Dekker, Andre; Lambin, Philippe

    2014-11-01

    To develop and externally validate a predictive model for pathologic complete response (pCR) for locally advanced rectal cancer (LARC) based on clinical features and early sequential ¹⁸F-FDG PET-CT imaging. Prospective data (inter alia, the THUNDER trial) were used to train (N=112, MAASTRO Clinic) and validate (N=78, Università Cattolica del S. Cuore) the model for pCR (ypT0N0). All patients received long-course chemoradiotherapy (CRT) and surgery. Clinical parameters were age, gender, clinical tumour (cT) stage and clinical nodal (cN) stage. PET parameters were SUVmax, SUVmean, metabolic tumour volume (MTV) and maximal tumour diameter, for which response indices between the pre-treatment and intermediate scans were calculated. Using multivariate logistic regression, three probability groups for pCR were defined. The pCR rates were 21.4% (training) and 23.1% (validation). The selected predictive features for pCR were cT-stage, cN-stage, response index of SUVmean and maximal tumour diameter during treatment. The models’ performances (AUC) were 0.78 (training) and 0.70 (validation). The high probability group for pCR resulted in 100% correct predictions for training and 67% for validation. The model is available on the website www.predictcancer.org. The developed predictive model for pCR is accurate and externally validated. This model may assist in treatment decisions during CRT to select complete responders for a wait-and-see policy, good responders for an extra RT boost, and bad responders for additional chemotherapy. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  13. Cross Cultural Adaptation, Validity, and Reliability of the Farsi Breastfeeding Attrition Prediction Tools in Iranian Pregnant Women

    PubMed Central

    Mortazavi, Forough; Mousavi, Seyed Abbas; Chaman, Reza; Khosravi, Ahmad; Janke, Jill R.

    2015-01-01

    Background: The rate of exclusive breastfeeding in Iran is decreasing. The breastfeeding attrition prediction tools (BAPT) have been validated and used in predicting premature weaning. Objectives: We aimed to translate the BAPT into Farsi, assess its content validity, and examine its reliability and validity to identify exclusive breastfeeding discontinuation in Iran. Materials and Methods: The BAPT was translated into Farsi and the content validity of the Farsi version of the BAPT was assessed. It was administered to 356 pregnant women in the third trimester of pregnancy, who were residents of a city in northeast of Iran. The structural integrity of the four-factor model was assessed in confirmatory factor analysis (CFA) and exploratory factor analysis (EFA). Reliability was assessed using Cronbach’s alpha coefficient and item-subscale correlations. Validity was assessed using the known-group comparison (128 with vs. 228 without breastfeeding experience) and predictive validity (80 successes vs. 265 failures in exclusive breastfeeding). Results: The internal consistency of the whole instrument (49 items) was 0.775. CFA provided an acceptable fit to the a priori four-factor model (Chi-square/df = 1.8, Root Mean Square Error of Approximation (RMSEA) = 0.049, Standardized Root Mean Square Residual (SRMR) = 0.064, Comparative Fit Index (CFI) = 0.911). The difference in means of breastfeeding control (BFC) between the participants with and without breastfeeding experience was significant (P < 0.001). In addition, the total score of BAPT and the score of Breast Feeding Control (BFC) subscale were higher in women who were on exclusive breastfeeding than women who were not, at four months postpartum (P < 0.05). Conclusions: This study validated the Farsi version of BAPT. It is useful for researchers who want to use it in Iran to identify women at higher risks of Exclusive Breast Feeding (EBF) discontinuation. PMID:26019910
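The internal-consistency statistic reported above (Cronbach's alpha) has a compact formula; a sketch on a toy score matrix (not the BAPT data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)
```

Alpha approaches 1 when items move together across respondents, which is the sense in which the reported 0.775 indicates acceptable consistency for the 49-item instrument.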

  14. How Nonrecidivism Affects Predictive Accuracy: Evidence from a Cross-Validation of the Ontario Domestic Assault Risk Assessment (ODARA)

    ERIC Educational Resources Information Center

    Hilton, N. Zoe; Harris, Grant T.

    2009-01-01

    Prediction effect sizes such as ROC area are important for demonstrating a risk assessment's generalizability and utility. How a study defines recidivism might affect predictive accuracy. Nonrecidivism is problematic when predicting specialized violence (e.g., domestic violence). The present study cross-validates the ability of the Ontario…

  15. Tone Noise Predictions for a Spacecraft Cabin Ventilation Fan Ingesting Distorted Inflow and the Challenges of Validation

    NASA Technical Reports Server (NTRS)

    Koch, L. Danielle; Shook, Tony D.; Astler, Douglas T.; Bittinger, Samantha A.

    2011-01-01

A fan tone noise prediction code has been developed at NASA Glenn Research Center that is capable of estimating duct mode sound power levels for a fan ingesting distorted inflow. This code was used to predict the circumferential and radial mode sound power levels in the inlet and exhaust duct of an axial spacecraft cabin ventilation fan. Noise predictions at fan design rotational speed were generated. Three fan inflow conditions were studied: an undistorted inflow, a circumferentially symmetric inflow distortion pattern (cylindrical rods inserted radially into the flowpath at 15°, 135°, and 255°), and a circumferentially asymmetric inflow distortion pattern (rods located at 15°, 52°, and 173°). Noise predictions indicate that tones are produced for the distorted inflow cases that are not present when the fan operates with an undistorted inflow. Experimental data are needed to validate these acoustic predictions, as well as the aerodynamic performance predictions. Given the aerodynamic design of the spacecraft cabin ventilation fan, a mechanical and electrical conceptual design study was conducted. Design features of a fan suitable for obtaining detailed acoustic and aerodynamic measurements needed to validate predictions are discussed.

  16. Tone Noise Predictions for a Spacecraft Cabin Ventilation Fan Ingesting Distorted Inflow and the Challenges of Validation

    NASA Technical Reports Server (NTRS)

    Koch, L. Danielle; Shook, Tony D.; Astler, Douglas T.; Bittinger, Samantha A.

    2012-01-01

A fan tone noise prediction code has been developed at NASA Glenn Research Center that is capable of estimating duct mode sound power levels for a fan ingesting distorted inflow. This code was used to predict the circumferential and radial mode sound power levels in the inlet and exhaust duct of an axial spacecraft cabin ventilation fan. Noise predictions at fan design rotational speed were generated. Three fan inflow conditions were studied: an undistorted inflow, a circumferentially symmetric inflow distortion pattern (cylindrical rods inserted radially into the flowpath at 15°, 135°, and 255°), and a circumferentially asymmetric inflow distortion pattern (rods located at 15°, 52°, and 173°). Noise predictions indicate that tones are produced for the distorted inflow cases that are not present when the fan operates with an undistorted inflow. Experimental data are needed to validate these acoustic predictions, as well as the aerodynamic performance predictions. Given the aerodynamic design of the spacecraft cabin ventilation fan, a mechanical and electrical conceptual design study was conducted. Design features of a fan suitable for obtaining detailed acoustic and aerodynamic measurements needed to validate predictions are discussed.

  17. Developing and Validating a Survival Prediction Model for NSCLC Patients Through Distributed Learning Across 3 Countries.

    PubMed

    Jochems, Arthur; Deist, Timo M; El Naqa, Issam; Kessler, Marc; Mayo, Chuck; Reeves, Jackson; Jolly, Shruti; Matuszak, Martha; Ten Haken, Randall; van Soest, Johan; Oberije, Cary; Faivre-Finn, Corinne; Price, Gareth; de Ruysscher, Dirk; Lambin, Philippe; Dekker, Andre

    2017-10-01

Tools for survival prediction for non-small cell lung cancer (NSCLC) patients treated with chemoradiation or radiation therapy are of limited quality. In this work, we developed a predictive model of survival at 2 years. The model is based on a large volume of historical patient data and serves as a proof of concept for the distributed learning approach. Clinical data from 698 lung cancer patients, treated with curative intent with chemoradiation or radiation therapy alone, were collected and stored at 2 different cancer institutes (559 patients at Maastro Clinic in the Netherlands and 139 at the University of Michigan in the United States). The model was further validated on 196 patients originating from The Christie (United Kingdom). A Bayesian network model was adapted for distributed learning (an animation can be viewed at https://www.youtube.com/watch?v=ZDJFOxpwqEA). Two-year posttreatment survival was chosen as the endpoint. The Maastro Clinic cohort data are publicly available at https://www.cancerdata.org/publication/developing-and-validating-survival-prediction-model-nsclc-patients-through-distributed, and the developed models can be found at www.predictcancer.org. Variables included in the final model were T and N category, age, performance status, and total tumor dose. The model has an area under the curve (AUC) of 0.66 on the external validation set and an AUC of 0.62 in 5-fold cross-validation. A model based on the T and N category alone performed with an AUC of 0.47 on the validation set, significantly worse than our model (P < .001). Learning the model in a centralized or distributed fashion yields only a minor difference in the conditional probability tables (0.6%), and the discriminative performance of the models on the validation set is similar (P = .26). Distributed learning from federated databases allows predictive models to be learned on data originating from multiple institutions while avoiding many of the data-sharing barriers. We believe
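The model comparisons above are stated in terms of the area under the ROC curve (AUC). As a reminder of what that statistic measures (this is a generic sketch with made-up labels and scores, not code or data from the study), AUC equals the Mann-Whitney probability that a randomly chosen positive case receives a higher predicted score than a randomly chosen negative case, with ties counting one half:

```python
def roc_auc(labels, scores):
    """ROC AUC via the Mann-Whitney U statistic.

    labels: 0/1 outcome per case; scores: model-predicted risk per case.
    Returns the fraction of (positive, negative) pairs the model ranks
    correctly, counting ties as half a win.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Three of four positive/negative pairs ranked correctly.
print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```

On this scale, 0.5 is chance-level ranking, which puts the reported 0.66 (full model) vs. 0.47 (T and N category only) contrast in context.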

  18. Prediction and validation of residual feed intake and dry matter intake in Danish lactating dairy cows using mid-infrared spectroscopy of milk.

    PubMed

    Shetty, N; Løvendahl, P; Lund, M S; Buitenhuis, A J

    2017-01-01

The present study explored the effectiveness of Fourier transform mid-infrared (FT-IR) spectral profiles as a predictor for dry matter intake (DMI) and residual feed intake (RFI). Partial least squares regression was used to develop the prediction models. The models were validated using different external test sets: one randomly leaving out 20% of the records (validation A), a second randomly leaving out 20% of the cows (validation B), and a third (for the DMI prediction models) randomly leaving out one cow (validation C). The data included 1,044 records from 140 cows; 97 were Danish Holstein and 43 Danish Jersey. Results showed better accuracies for validation A compared with the other validation methods. Milk yield (MY) contributed largely to DMI prediction; MY explained 59% of the variation, and the root mean square error of prediction (RMSEP) of the validated model was 2.24 kg. The model was improved by adding live weight (LW) as an additional predictor trait, where the accuracy R² increased from 0.59 to 0.72 and the RMSEP decreased from 2.24 to 1.83 kg. When only the milk FT-IR spectral profile was used for DMI prediction, a lower prediction ability was obtained, with R² = 0.30 and RMSEP = 2.91 kg. However, once the spectral information was added along with MY and LW as predictors, model accuracy improved: R² increased to 0.81 and RMSEP decreased to 1.49 kg. Prediction accuracies of RFI changed throughout lactation. The RFI prediction model for the early-lactation stage was better than models across lactation or for the mid- and late-lactation stages, with R² = 0.46 and RMSEP = 1.70. The most important spectral wavenumbers contributing to the DMI and RFI prediction models included the fat, protein, and lactose peaks. Comparable prediction results were obtained when using infrared-predicted fat, protein, and lactose instead of the full spectra, indicating that FT-IR spectral data do not add significant new information to improve DMI and RFI prediction models. Therefore, in
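The accuracy figures quoted above (RMSEP and R²) are the standard hold-out regression diagnostics: RMSEP is the root mean square error of prediction on the test set, and R² is the fraction of outcome variance the predictions explain. A small illustrative sketch of both (toy numbers, not tied to the study's data) is:

```python
import math

def rmsep(y_true, y_pred):
    """Root mean square error of prediction over a held-out test set."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def r_squared(y_true, y_pred):
    """Fraction of variance in y_true explained by the predictions:
    1 - (residual sum of squares) / (total sum of squares)."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Perfect predictions: zero error, all variance explained.
print(rmsep([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))      # → 0.0
print(r_squared([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # → 1.0
```

Reading the abstract's numbers through these definitions, adding spectra to MY and LW shrank the typical DMI prediction error from 1.83 kg to 1.49 kg while raising explained variance from 0.72 to 0.81.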

  19. External validation of a nomogram for prediction of side-specific extracapsular extension at robotic radical prostatectomy.

    PubMed

    Zorn, Kevin C; Gallina, Andrea; Hutterer, Georg C; Walz, Jochen; Shalhav, Arieh L; Zagaja, Gregory P; Valiquette, Luc; Gofrit, Ofer N; Orvieto, Marcelo A; Taxy, Jerome B; Karakiewicz, Pierre I

    2007-11-01

Several staging tools have been developed for open radical prostatectomy (ORP) patients. However, the validity of these tools has never been formally tested in patients treated with robot-assisted laparoscopic radical prostatectomy (RALP). We tested the accuracy of an ORP-derived nomogram in predicting the rate of extracapsular extension (ECE) in a large RALP cohort. Serum prostate-specific antigen (PSA), side-specific clinical stage, and biopsy Gleason sum were used in a previously validated nomogram predicting side-specific ECE. The nomogram-derived predictions were compared with the observed rate of ECE, and the accuracy of the predictions was quantified. Each prostate lobe was analyzed independently. As complete data were available for 576 patients, the analyses targeted 1152 prostate lobes. Median age and serum PSA concentration at radical prostatectomy were 60 years and 5.4 ng/mL, respectively. The majority of side-specific clinical stages were T1c (993; 86.2%). Most side-specific biopsy Gleason sums were 6 (572; 49.7%). The median side-specific percentages of positive cores and of cancer were, respectively, 20.0% and 5.0%. At final pathologic review, 107 patients (18.6%) had ECE, and side-specific ECE was present in 117 patients (20.3%). The nomogram was 89% accurate in the RALP cohort vs. 84% in the previously reported ORP validation. The ORP side-specific ECE nomogram is highly accurate in the RALP population, suggesting that predictive and possibly prognostic tools developed in ORP patients may be equally accurate in their RALP counterparts.

  20. On the incremental validity of irrational beliefs to predict subjective well-being while controlling for personality factors.

    PubMed

    Spörrle, Matthias; Strobel, Maria; Tumasjan, Andranik

    2010-11-01

    This research examines the incremental validity of irrational thinking as conceptualized by Albert Ellis to predict diverse aspects of subjective well-being while controlling for the influence of personality factors. Rational-emotive behavior therapy (REBT) argues that irrational beliefs result in maladaptive emotions leading to reduced well-being. Although there is some early scientific evidence for this relation, it has never been investigated whether this connection would still persist when statistically controlling for the Big Five personality factors, which were consistently found to be important determinants of well-being. Regression analyses revealed significant incremental validity of irrationality over personality factors when predicting life satisfaction, but not when predicting subjective happiness. Results are discussed with respect to conceptual differences between these two aspects of subjective well-being.